00:00:00.001 Started by upstream project "autotest-per-patch" build number 132391 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.147 Using shallow fetch with depth 1 00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.147 > git --version # timeout=10 00:00:00.194 > git --version # 'git version 2.39.2' 00:00:00.194 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.948 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.960 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.971 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.971 > git config core.sparsecheckout # timeout=10 00:00:04.982 > git read-tree -mu HEAD # timeout=10 00:00:04.997 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.019 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.019 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.117 [Pipeline] Start of Pipeline 00:00:05.130 [Pipeline] library 00:00:05.132 Loading library shm_lib@master 00:00:05.132 Library shm_lib@master is cached. Copying from home. 00:00:05.150 [Pipeline] node 00:00:05.162 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.164 [Pipeline] { 00:00:05.175 [Pipeline] catchError 00:00:05.176 [Pipeline] { 00:00:05.189 [Pipeline] wrap 00:00:05.198 [Pipeline] { 00:00:05.204 [Pipeline] stage 00:00:05.206 [Pipeline] { (Prologue) 00:00:05.449 [Pipeline] sh 00:00:05.727 + logger -p user.info -t JENKINS-CI 00:00:05.745 [Pipeline] echo 00:00:05.747 Node: WFP8 00:00:05.755 [Pipeline] sh 00:00:06.051 [Pipeline] setCustomBuildProperty 00:00:06.062 [Pipeline] echo 00:00:06.064 Cleanup processes 00:00:06.069 [Pipeline] sh 00:00:06.351 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.351 1316586 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.361 [Pipeline] sh 00:00:06.640 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.640 ++ grep -v 'sudo pgrep' 00:00:06.640 ++ awk '{print $1}' 00:00:06.640 + sudo kill -9 00:00:06.640 + true 00:00:06.653 [Pipeline] cleanWs 00:00:06.663 [WS-CLEANUP] Deleting project workspace... 00:00:06.663 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.674 [WS-CLEANUP] done 00:00:06.678 [Pipeline] setCustomBuildProperty 00:00:06.693 [Pipeline] sh 00:00:06.975 + sudo git config --global --replace-all safe.directory '*' 00:00:07.042 [Pipeline] httpRequest 00:00:08.158 [Pipeline] echo 00:00:08.160 Sorcerer 10.211.164.20 is alive 00:00:08.169 [Pipeline] retry 00:00:08.171 [Pipeline] { 00:00:08.183 [Pipeline] httpRequest 00:00:08.187 HttpMethod: GET 00:00:08.188 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.188 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.212 Response Code: HTTP/1.1 200 OK 00:00:08.213 Success: Status code 200 is in the accepted range: 200,404 00:00:08.213 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.615 [Pipeline] } 00:00:13.630 [Pipeline] // retry 00:00:13.636 [Pipeline] sh 00:00:13.916 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.930 [Pipeline] httpRequest 00:00:14.278 [Pipeline] echo 00:00:14.280 Sorcerer 10.211.164.20 is alive 00:00:14.292 [Pipeline] retry 00:00:14.295 [Pipeline] { 00:00:14.312 [Pipeline] httpRequest 00:00:14.316 HttpMethod: GET 00:00:14.317 URL: http://10.211.164.20/packages/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:00:14.317 Sending request to url: http://10.211.164.20/packages/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:00:14.325 Response Code: HTTP/1.1 200 OK 00:00:14.325 Success: Status code 200 is in the accepted range: 200,404 00:00:14.326 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:06:27.516 [Pipeline] } 00:06:27.533 [Pipeline] // retry 00:06:27.541 [Pipeline] sh 00:06:27.828 + tar --no-same-owner -xf spdk_d2ebd983ec796cf3c9bd94783f62b7de1f7bf0f0.tar.gz 00:06:30.375 [Pipeline] sh 00:06:30.659 + git -C spdk log 
--oneline -n5 00:06:30.659 d2ebd983e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:06:30.659 fa4f4fd15 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:06:30.659 b1f0bbae7 nvmf: Expose DIF type of namespace to host again 00:06:30.659 f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:06:30.659 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:06:30.670 [Pipeline] } 00:06:30.681 [Pipeline] // stage 00:06:30.689 [Pipeline] stage 00:06:30.691 [Pipeline] { (Prepare) 00:06:30.708 [Pipeline] writeFile 00:06:30.723 [Pipeline] sh 00:06:31.007 + logger -p user.info -t JENKINS-CI 00:06:31.019 [Pipeline] sh 00:06:31.299 + logger -p user.info -t JENKINS-CI 00:06:31.311 [Pipeline] sh 00:06:31.594 + cat autorun-spdk.conf 00:06:31.594 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:31.594 SPDK_TEST_NVMF=1 00:06:31.594 SPDK_TEST_NVME_CLI=1 00:06:31.594 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:31.594 SPDK_TEST_NVMF_NICS=e810 00:06:31.594 SPDK_TEST_VFIOUSER=1 00:06:31.594 SPDK_RUN_UBSAN=1 00:06:31.594 NET_TYPE=phy 00:06:31.601 RUN_NIGHTLY=0 00:06:31.606 [Pipeline] readFile 00:06:31.633 [Pipeline] withEnv 00:06:31.635 [Pipeline] { 00:06:31.646 [Pipeline] sh 00:06:31.930 + set -ex 00:06:31.930 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:06:31.930 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:31.930 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:31.930 ++ SPDK_TEST_NVMF=1 00:06:31.930 ++ SPDK_TEST_NVME_CLI=1 00:06:31.930 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:31.930 ++ SPDK_TEST_NVMF_NICS=e810 00:06:31.930 ++ SPDK_TEST_VFIOUSER=1 00:06:31.930 ++ SPDK_RUN_UBSAN=1 00:06:31.930 ++ NET_TYPE=phy 00:06:31.930 ++ RUN_NIGHTLY=0 00:06:31.930 + case $SPDK_TEST_NVMF_NICS in 00:06:31.930 + DRIVERS=ice 00:06:31.930 + [[ tcp == \r\d\m\a ]] 00:06:31.930 + [[ -n ice ]] 00:06:31.930 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:06:31.930 rmmod: ERROR: Module mlx4_ib 
is not currently loaded 00:06:31.930 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:06:31.930 rmmod: ERROR: Module irdma is not currently loaded 00:06:31.930 rmmod: ERROR: Module i40iw is not currently loaded 00:06:31.930 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:06:31.930 + true 00:06:31.930 + for D in $DRIVERS 00:06:31.930 + sudo modprobe ice 00:06:31.930 + exit 0 00:06:31.939 [Pipeline] } 00:06:31.956 [Pipeline] // withEnv 00:06:31.961 [Pipeline] } 00:06:31.979 [Pipeline] // stage 00:06:31.987 [Pipeline] catchError 00:06:31.989 [Pipeline] { 00:06:32.000 [Pipeline] timeout 00:06:32.001 Timeout set to expire in 1 hr 0 min 00:06:32.002 [Pipeline] { 00:06:32.016 [Pipeline] stage 00:06:32.018 [Pipeline] { (Tests) 00:06:32.031 [Pipeline] sh 00:06:32.316 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:32.316 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:32.316 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:32.316 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:06:32.316 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.316 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:32.316 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:06:32.316 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:32.316 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:32.316 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:32.316 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:06:32.316 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:32.316 + source /etc/os-release 00:06:32.316 ++ NAME='Fedora Linux' 00:06:32.316 ++ VERSION='39 (Cloud Edition)' 00:06:32.316 ++ ID=fedora 00:06:32.316 ++ VERSION_ID=39 00:06:32.316 ++ VERSION_CODENAME= 00:06:32.316 ++ PLATFORM_ID=platform:f39 00:06:32.316 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:32.316 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:32.316 ++ LOGO=fedora-logo-icon 00:06:32.316 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:32.316 ++ HOME_URL=https://fedoraproject.org/ 00:06:32.316 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:32.316 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:32.316 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:32.316 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:32.316 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:32.316 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:32.316 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:32.316 ++ SUPPORT_END=2024-11-12 00:06:32.316 ++ VARIANT='Cloud Edition' 00:06:32.316 ++ VARIANT_ID=cloud 00:06:32.316 + uname -a 00:06:32.316 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:32.316 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:34.851 Hugepages 00:06:34.851 node hugesize free / total 00:06:34.851 node0 1048576kB 0 / 0 00:06:34.851 node0 2048kB 0 / 0 00:06:34.851 node1 1048576kB 0 / 0 00:06:34.851 node1 2048kB 0 / 0 00:06:34.851 00:06:34.851 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:34.851 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:34.851 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:06:34.851 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:34.851 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:34.851 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:34.852 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:34.852 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:34.852 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:34.852 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:34.852 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:34.852 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:34.852 + rm -f /tmp/spdk-ld-path 00:06:34.852 + source autorun-spdk.conf 00:06:34.852 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:34.852 ++ SPDK_TEST_NVMF=1 00:06:34.852 ++ SPDK_TEST_NVME_CLI=1 00:06:34.852 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:34.852 ++ SPDK_TEST_NVMF_NICS=e810 00:06:34.852 ++ SPDK_TEST_VFIOUSER=1 00:06:34.852 ++ SPDK_RUN_UBSAN=1 00:06:34.852 ++ NET_TYPE=phy 00:06:34.852 ++ RUN_NIGHTLY=0 00:06:34.852 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:34.852 + [[ -n '' ]] 00:06:34.852 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.852 + for M in /var/spdk/build-*-manifest.txt 00:06:34.852 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:34.852 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:34.852 + for M in /var/spdk/build-*-manifest.txt 00:06:34.852 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:34.852 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:34.852 + for M in /var/spdk/build-*-manifest.txt 00:06:34.852 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:06:34.852 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:34.852 ++ uname 00:06:34.852 + [[ Linux == \L\i\n\u\x ]] 00:06:34.852 + sudo dmesg -T 00:06:35.111 + sudo dmesg --clear 00:06:35.111 + dmesg_pid=1318566 00:06:35.111 + [[ Fedora Linux == FreeBSD ]] 00:06:35.111 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.111 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.111 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:35.111 + [[ -x /usr/src/fio-static/fio ]] 00:06:35.111 + export FIO_BIN=/usr/src/fio-static/fio 00:06:35.111 + FIO_BIN=/usr/src/fio-static/fio 00:06:35.111 + sudo dmesg -Tw 00:06:35.111 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:35.111 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:35.111 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:35.111 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.111 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.111 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:35.111 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.111 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.111 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:35.111 14:25:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:35.111 14:25:46 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:06:35.111 14:25:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:35.111 14:25:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:35.111 14:25:46 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:35.111 14:25:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:35.111 14:25:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.111 14:25:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:35.111 14:25:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:35.111 14:25:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.111 14:25:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.111 14:25:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.111 14:25:47 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.111 14:25:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.111 14:25:47 -- paths/export.sh@5 -- $ export PATH 00:06:35.111 14:25:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.111 14:25:47 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:35.111 14:25:47 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:35.111 14:25:47 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109147.XXXXXX 00:06:35.111 14:25:47 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109147.472YJZ 00:06:35.111 14:25:47 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:35.111 14:25:47 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:35.111 14:25:47 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:06:35.112 14:25:47 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:35.112 14:25:47 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:35.112 14:25:47 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:35.112 14:25:47 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:35.112 14:25:47 -- common/autotest_common.sh@10 -- $ set +x 00:06:35.112 14:25:47 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:35.112 14:25:47 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:35.112 14:25:47 -- pm/common@17 -- $ local monitor 00:06:35.112 14:25:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.112 14:25:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.112 14:25:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.112 14:25:47 -- pm/common@21 -- $ date +%s 00:06:35.112 14:25:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.112 14:25:47 -- pm/common@21 -- $ date +%s 00:06:35.112 14:25:47 -- pm/common@25 -- $ sleep 1 00:06:35.112 14:25:47 -- pm/common@21 -- $ date +%s 00:06:35.112 14:25:47 -- pm/common@21 -- $ date +%s 00:06:35.112 14:25:47 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109147 00:06:35.112 14:25:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109147 00:06:35.112 14:25:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109147 00:06:35.112 14:25:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732109147 00:06:35.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109147_collect-cpu-load.pm.log 00:06:35.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109147_collect-vmstat.pm.log 00:06:35.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109147_collect-cpu-temp.pm.log 00:06:35.371 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732109147_collect-bmc-pm.bmc.pm.log 00:06:36.307 14:25:48 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:36.307 14:25:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:36.307 14:25:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:36.307 14:25:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.307 14:25:48 -- spdk/autobuild.sh@16 -- $ date -u 00:06:36.307 Wed Nov 20 01:25:48 PM UTC 2024 00:06:36.307 14:25:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:06:36.307 v25.01-pre-252-gd2ebd983e 00:06:36.307 14:25:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:36.307 14:25:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:36.307 14:25:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:36.307 14:25:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:36.307 14:25:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:36.307 14:25:48 -- common/autotest_common.sh@10 -- $ set +x 00:06:36.307 ************************************ 00:06:36.307 START TEST ubsan 00:06:36.307 ************************************ 00:06:36.307 14:25:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:36.307 using ubsan 00:06:36.307 00:06:36.307 real 0m0.000s 00:06:36.307 user 0m0.000s 00:06:36.307 sys 0m0.000s 00:06:36.307 14:25:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:36.307 14:25:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:36.307 ************************************ 00:06:36.307 END TEST ubsan 00:06:36.307 ************************************ 00:06:36.307 14:25:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:36.307 14:25:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:36.307 14:25:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:36.307 14:25:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:36.307 14:25:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:36.308 14:25:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:36.308 14:25:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:36.308 14:25:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:36.308 14:25:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:06:36.566 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:36.566 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:36.825 Using 'verbs' RDMA provider 00:06:49.978 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:02.184 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:02.184 Creating mk/config.mk...done. 00:07:02.184 Creating mk/cc.flags.mk...done. 00:07:02.184 Type 'make' to build. 00:07:02.184 14:26:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:07:02.184 14:26:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:02.184 14:26:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:02.184 14:26:13 -- common/autotest_common.sh@10 -- $ set +x 00:07:02.184 ************************************ 00:07:02.184 START TEST make 00:07:02.184 ************************************ 00:07:02.184 14:26:13 make -- common/autotest_common.sh@1129 -- $ make -j96 00:07:02.443 make[1]: Nothing to be done for 'all'. 
00:07:03.840 The Meson build system 00:07:03.840 Version: 1.5.0 00:07:03.840 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:07:03.840 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:03.840 Build type: native build 00:07:03.840 Project name: libvfio-user 00:07:03.840 Project version: 0.0.1 00:07:03.840 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:03.840 C linker for the host machine: cc ld.bfd 2.40-14 00:07:03.840 Host machine cpu family: x86_64 00:07:03.840 Host machine cpu: x86_64 00:07:03.840 Run-time dependency threads found: YES 00:07:03.840 Library dl found: YES 00:07:03.840 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:03.840 Run-time dependency json-c found: YES 0.17 00:07:03.840 Run-time dependency cmocka found: YES 1.1.7 00:07:03.840 Program pytest-3 found: NO 00:07:03.840 Program flake8 found: NO 00:07:03.840 Program misspell-fixer found: NO 00:07:03.840 Program restructuredtext-lint found: NO 00:07:03.840 Program valgrind found: YES (/usr/bin/valgrind) 00:07:03.840 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:03.840 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:03.840 Compiler for C supports arguments -Wwrite-strings: YES 00:07:03.840 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:07:03.840 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:07:03.840 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:07:03.840 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:07:03.840 Build targets in project: 8 00:07:03.840 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:07:03.840 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:07:03.840 00:07:03.840 libvfio-user 0.0.1 00:07:03.840 00:07:03.840 User defined options 00:07:03.840 buildtype : debug 00:07:03.840 default_library: shared 00:07:03.840 libdir : /usr/local/lib 00:07:03.840 00:07:03.840 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:04.097 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:04.355 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:07:04.355 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:07:04.355 [3/37] Compiling C object samples/null.p/null.c.o 00:07:04.355 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:07:04.355 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:07:04.355 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:07:04.355 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:07:04.355 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:07:04.355 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:07:04.355 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:07:04.355 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:07:04.355 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:07:04.355 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:07:04.355 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:07:04.355 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:07:04.355 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:07:04.355 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:07:04.355 [18/37] Compiling C 
object test/unit_tests.p/mocks.c.o 00:07:04.355 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:07:04.355 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:07:04.355 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:07:04.355 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:07:04.355 [23/37] Compiling C object samples/client.p/client.c.o 00:07:04.355 [24/37] Compiling C object samples/server.p/server.c.o 00:07:04.355 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:07:04.355 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:07:04.355 [27/37] Linking target samples/client 00:07:04.355 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:07:04.613 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:07:04.613 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:07:04.613 [31/37] Linking target test/unit_tests 00:07:04.613 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:07:04.613 [33/37] Linking target samples/server 00:07:04.613 [34/37] Linking target samples/null 00:07:04.613 [35/37] Linking target samples/gpio-pci-idio-16 00:07:04.613 [36/37] Linking target samples/lspci 00:07:04.613 [37/37] Linking target samples/shadow_ioeventfd_server 00:07:04.613 INFO: autodetecting backend as ninja 00:07:04.613 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:04.871 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:05.130 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:05.130 ninja: no work to do. 
00:07:10.403 The Meson build system 00:07:10.403 Version: 1.5.0 00:07:10.403 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:07:10.403 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:07:10.403 Build type: native build 00:07:10.403 Program cat found: YES (/usr/bin/cat) 00:07:10.403 Project name: DPDK 00:07:10.403 Project version: 24.03.0 00:07:10.403 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:10.403 C linker for the host machine: cc ld.bfd 2.40-14 00:07:10.403 Host machine cpu family: x86_64 00:07:10.403 Host machine cpu: x86_64 00:07:10.403 Message: ## Building in Developer Mode ## 00:07:10.403 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:10.403 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:10.403 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:10.403 Program python3 found: YES (/usr/bin/python3) 00:07:10.403 Program cat found: YES (/usr/bin/cat) 00:07:10.403 Compiler for C supports arguments -march=native: YES 00:07:10.403 Checking for size of "void *" : 8 00:07:10.403 Checking for size of "void *" : 8 (cached) 00:07:10.403 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:10.403 Library m found: YES 00:07:10.403 Library numa found: YES 00:07:10.403 Has header "numaif.h" : YES 00:07:10.403 Library fdt found: NO 00:07:10.403 Library execinfo found: NO 00:07:10.403 Has header "execinfo.h" : YES 00:07:10.403 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:10.403 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:10.403 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:10.403 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:10.403 Run-time dependency openssl found: YES 3.1.1 00:07:10.403 Run-time 
dependency libpcap found: YES 1.10.4 00:07:10.403 Has header "pcap.h" with dependency libpcap: YES 00:07:10.403 Compiler for C supports arguments -Wcast-qual: YES 00:07:10.403 Compiler for C supports arguments -Wdeprecated: YES 00:07:10.403 Compiler for C supports arguments -Wformat: YES 00:07:10.403 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:10.403 Compiler for C supports arguments -Wformat-security: NO 00:07:10.403 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:10.403 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:10.403 Compiler for C supports arguments -Wnested-externs: YES 00:07:10.403 Compiler for C supports arguments -Wold-style-definition: YES 00:07:10.403 Compiler for C supports arguments -Wpointer-arith: YES 00:07:10.403 Compiler for C supports arguments -Wsign-compare: YES 00:07:10.403 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:10.403 Compiler for C supports arguments -Wundef: YES 00:07:10.403 Compiler for C supports arguments -Wwrite-strings: YES 00:07:10.403 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:10.403 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:10.403 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:10.403 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:10.403 Program objdump found: YES (/usr/bin/objdump) 00:07:10.403 Compiler for C supports arguments -mavx512f: YES 00:07:10.403 Checking if "AVX512 checking" compiles: YES 00:07:10.403 Fetching value of define "__SSE4_2__" : 1 00:07:10.403 Fetching value of define "__AES__" : 1 00:07:10.403 Fetching value of define "__AVX__" : 1 00:07:10.403 Fetching value of define "__AVX2__" : 1 00:07:10.403 Fetching value of define "__AVX512BW__" : 1 00:07:10.403 Fetching value of define "__AVX512CD__" : 1 00:07:10.403 Fetching value of define "__AVX512DQ__" : 1 00:07:10.403 Fetching value of define "__AVX512F__" : 1 
00:07:10.403 Fetching value of define "__AVX512VL__" : 1 00:07:10.403 Fetching value of define "__PCLMUL__" : 1 00:07:10.403 Fetching value of define "__RDRND__" : 1 00:07:10.403 Fetching value of define "__RDSEED__" : 1 00:07:10.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:10.403 Fetching value of define "__znver1__" : (undefined) 00:07:10.403 Fetching value of define "__znver2__" : (undefined) 00:07:10.403 Fetching value of define "__znver3__" : (undefined) 00:07:10.403 Fetching value of define "__znver4__" : (undefined) 00:07:10.403 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:10.403 Message: lib/log: Defining dependency "log" 00:07:10.403 Message: lib/kvargs: Defining dependency "kvargs" 00:07:10.403 Message: lib/telemetry: Defining dependency "telemetry" 00:07:10.403 Checking for function "getentropy" : NO 00:07:10.403 Message: lib/eal: Defining dependency "eal" 00:07:10.403 Message: lib/ring: Defining dependency "ring" 00:07:10.403 Message: lib/rcu: Defining dependency "rcu" 00:07:10.403 Message: lib/mempool: Defining dependency "mempool" 00:07:10.403 Message: lib/mbuf: Defining dependency "mbuf" 00:07:10.403 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:10.403 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:10.403 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:10.403 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:10.403 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:10.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:10.403 Compiler for C supports arguments -mpclmul: YES 00:07:10.403 Compiler for C supports arguments -maes: YES 00:07:10.403 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:10.403 Compiler for C supports arguments -mavx512bw: YES 00:07:10.403 Compiler for C supports arguments -mavx512dq: YES 00:07:10.403 Compiler for C supports arguments -mavx512vl: YES 00:07:10.403 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:07:10.403 Compiler for C supports arguments -mavx2: YES 00:07:10.403 Compiler for C supports arguments -mavx: YES 00:07:10.403 Message: lib/net: Defining dependency "net" 00:07:10.403 Message: lib/meter: Defining dependency "meter" 00:07:10.403 Message: lib/ethdev: Defining dependency "ethdev" 00:07:10.403 Message: lib/pci: Defining dependency "pci" 00:07:10.403 Message: lib/cmdline: Defining dependency "cmdline" 00:07:10.403 Message: lib/hash: Defining dependency "hash" 00:07:10.403 Message: lib/timer: Defining dependency "timer" 00:07:10.403 Message: lib/compressdev: Defining dependency "compressdev" 00:07:10.403 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:10.403 Message: lib/dmadev: Defining dependency "dmadev" 00:07:10.403 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:10.403 Message: lib/power: Defining dependency "power" 00:07:10.403 Message: lib/reorder: Defining dependency "reorder" 00:07:10.403 Message: lib/security: Defining dependency "security" 00:07:10.403 Has header "linux/userfaultfd.h" : YES 00:07:10.403 Has header "linux/vduse.h" : YES 00:07:10.403 Message: lib/vhost: Defining dependency "vhost" 00:07:10.403 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:10.403 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:10.403 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:10.403 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:10.403 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:10.403 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:10.403 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:10.403 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:10.403 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:10.403 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:07:10.403 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:10.403 Configuring doxy-api-html.conf using configuration 00:07:10.403 Configuring doxy-api-man.conf using configuration 00:07:10.403 Program mandb found: YES (/usr/bin/mandb) 00:07:10.403 Program sphinx-build found: NO 00:07:10.403 Configuring rte_build_config.h using configuration 00:07:10.403 Message: 00:07:10.403 ================= 00:07:10.403 Applications Enabled 00:07:10.403 ================= 00:07:10.403 00:07:10.403 apps: 00:07:10.403 00:07:10.403 00:07:10.403 Message: 00:07:10.403 ================= 00:07:10.403 Libraries Enabled 00:07:10.403 ================= 00:07:10.403 00:07:10.403 libs: 00:07:10.403 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:10.403 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:10.403 cryptodev, dmadev, power, reorder, security, vhost, 00:07:10.403 00:07:10.403 Message: 00:07:10.403 =============== 00:07:10.403 Drivers Enabled 00:07:10.403 =============== 00:07:10.403 00:07:10.403 common: 00:07:10.403 00:07:10.403 bus: 00:07:10.403 pci, vdev, 00:07:10.403 mempool: 00:07:10.403 ring, 00:07:10.403 dma: 00:07:10.403 00:07:10.403 net: 00:07:10.403 00:07:10.403 crypto: 00:07:10.403 00:07:10.403 compress: 00:07:10.403 00:07:10.403 vdpa: 00:07:10.403 00:07:10.403 00:07:10.403 Message: 00:07:10.403 ================= 00:07:10.403 Content Skipped 00:07:10.403 ================= 00:07:10.403 00:07:10.403 apps: 00:07:10.403 dumpcap: explicitly disabled via build config 00:07:10.403 graph: explicitly disabled via build config 00:07:10.403 pdump: explicitly disabled via build config 00:07:10.403 proc-info: explicitly disabled via build config 00:07:10.403 test-acl: explicitly disabled via build config 00:07:10.403 test-bbdev: explicitly disabled via build config 00:07:10.403 test-cmdline: explicitly disabled via build config 00:07:10.403 test-compress-perf: explicitly disabled via build config 00:07:10.403 test-crypto-perf: explicitly disabled 
via build config 00:07:10.403 test-dma-perf: explicitly disabled via build config 00:07:10.403 test-eventdev: explicitly disabled via build config 00:07:10.403 test-fib: explicitly disabled via build config 00:07:10.403 test-flow-perf: explicitly disabled via build config 00:07:10.403 test-gpudev: explicitly disabled via build config 00:07:10.403 test-mldev: explicitly disabled via build config 00:07:10.403 test-pipeline: explicitly disabled via build config 00:07:10.404 test-pmd: explicitly disabled via build config 00:07:10.404 test-regex: explicitly disabled via build config 00:07:10.404 test-sad: explicitly disabled via build config 00:07:10.404 test-security-perf: explicitly disabled via build config 00:07:10.404 00:07:10.404 libs: 00:07:10.404 argparse: explicitly disabled via build config 00:07:10.404 metrics: explicitly disabled via build config 00:07:10.404 acl: explicitly disabled via build config 00:07:10.404 bbdev: explicitly disabled via build config 00:07:10.404 bitratestats: explicitly disabled via build config 00:07:10.404 bpf: explicitly disabled via build config 00:07:10.404 cfgfile: explicitly disabled via build config 00:07:10.404 distributor: explicitly disabled via build config 00:07:10.404 efd: explicitly disabled via build config 00:07:10.404 eventdev: explicitly disabled via build config 00:07:10.404 dispatcher: explicitly disabled via build config 00:07:10.404 gpudev: explicitly disabled via build config 00:07:10.404 gro: explicitly disabled via build config 00:07:10.404 gso: explicitly disabled via build config 00:07:10.404 ip_frag: explicitly disabled via build config 00:07:10.404 jobstats: explicitly disabled via build config 00:07:10.404 latencystats: explicitly disabled via build config 00:07:10.404 lpm: explicitly disabled via build config 00:07:10.404 member: explicitly disabled via build config 00:07:10.404 pcapng: explicitly disabled via build config 00:07:10.404 rawdev: explicitly disabled via build config 00:07:10.404 regexdev: 
explicitly disabled via build config 00:07:10.404 mldev: explicitly disabled via build config 00:07:10.404 rib: explicitly disabled via build config 00:07:10.404 sched: explicitly disabled via build config 00:07:10.404 stack: explicitly disabled via build config 00:07:10.404 ipsec: explicitly disabled via build config 00:07:10.404 pdcp: explicitly disabled via build config 00:07:10.404 fib: explicitly disabled via build config 00:07:10.404 port: explicitly disabled via build config 00:07:10.404 pdump: explicitly disabled via build config 00:07:10.404 table: explicitly disabled via build config 00:07:10.404 pipeline: explicitly disabled via build config 00:07:10.404 graph: explicitly disabled via build config 00:07:10.404 node: explicitly disabled via build config 00:07:10.404 00:07:10.404 drivers: 00:07:10.404 common/cpt: not in enabled drivers build config 00:07:10.404 common/dpaax: not in enabled drivers build config 00:07:10.404 common/iavf: not in enabled drivers build config 00:07:10.404 common/idpf: not in enabled drivers build config 00:07:10.404 common/ionic: not in enabled drivers build config 00:07:10.404 common/mvep: not in enabled drivers build config 00:07:10.404 common/octeontx: not in enabled drivers build config 00:07:10.404 bus/auxiliary: not in enabled drivers build config 00:07:10.404 bus/cdx: not in enabled drivers build config 00:07:10.404 bus/dpaa: not in enabled drivers build config 00:07:10.404 bus/fslmc: not in enabled drivers build config 00:07:10.404 bus/ifpga: not in enabled drivers build config 00:07:10.404 bus/platform: not in enabled drivers build config 00:07:10.404 bus/uacce: not in enabled drivers build config 00:07:10.404 bus/vmbus: not in enabled drivers build config 00:07:10.404 common/cnxk: not in enabled drivers build config 00:07:10.404 common/mlx5: not in enabled drivers build config 00:07:10.404 common/nfp: not in enabled drivers build config 00:07:10.404 common/nitrox: not in enabled drivers build config 00:07:10.404 
common/qat: not in enabled drivers build config 00:07:10.404 common/sfc_efx: not in enabled drivers build config 00:07:10.404 mempool/bucket: not in enabled drivers build config 00:07:10.404 mempool/cnxk: not in enabled drivers build config 00:07:10.404 mempool/dpaa: not in enabled drivers build config 00:07:10.404 mempool/dpaa2: not in enabled drivers build config 00:07:10.404 mempool/octeontx: not in enabled drivers build config 00:07:10.404 mempool/stack: not in enabled drivers build config 00:07:10.404 dma/cnxk: not in enabled drivers build config 00:07:10.404 dma/dpaa: not in enabled drivers build config 00:07:10.404 dma/dpaa2: not in enabled drivers build config 00:07:10.404 dma/hisilicon: not in enabled drivers build config 00:07:10.404 dma/idxd: not in enabled drivers build config 00:07:10.404 dma/ioat: not in enabled drivers build config 00:07:10.404 dma/skeleton: not in enabled drivers build config 00:07:10.404 net/af_packet: not in enabled drivers build config 00:07:10.404 net/af_xdp: not in enabled drivers build config 00:07:10.404 net/ark: not in enabled drivers build config 00:07:10.404 net/atlantic: not in enabled drivers build config 00:07:10.404 net/avp: not in enabled drivers build config 00:07:10.404 net/axgbe: not in enabled drivers build config 00:07:10.404 net/bnx2x: not in enabled drivers build config 00:07:10.404 net/bnxt: not in enabled drivers build config 00:07:10.404 net/bonding: not in enabled drivers build config 00:07:10.404 net/cnxk: not in enabled drivers build config 00:07:10.404 net/cpfl: not in enabled drivers build config 00:07:10.404 net/cxgbe: not in enabled drivers build config 00:07:10.404 net/dpaa: not in enabled drivers build config 00:07:10.404 net/dpaa2: not in enabled drivers build config 00:07:10.404 net/e1000: not in enabled drivers build config 00:07:10.404 net/ena: not in enabled drivers build config 00:07:10.404 net/enetc: not in enabled drivers build config 00:07:10.404 net/enetfec: not in enabled drivers build 
config 00:07:10.404 net/enic: not in enabled drivers build config 00:07:10.404 net/failsafe: not in enabled drivers build config 00:07:10.404 net/fm10k: not in enabled drivers build config 00:07:10.404 net/gve: not in enabled drivers build config 00:07:10.404 net/hinic: not in enabled drivers build config 00:07:10.404 net/hns3: not in enabled drivers build config 00:07:10.404 net/i40e: not in enabled drivers build config 00:07:10.404 net/iavf: not in enabled drivers build config 00:07:10.404 net/ice: not in enabled drivers build config 00:07:10.404 net/idpf: not in enabled drivers build config 00:07:10.404 net/igc: not in enabled drivers build config 00:07:10.404 net/ionic: not in enabled drivers build config 00:07:10.404 net/ipn3ke: not in enabled drivers build config 00:07:10.404 net/ixgbe: not in enabled drivers build config 00:07:10.404 net/mana: not in enabled drivers build config 00:07:10.404 net/memif: not in enabled drivers build config 00:07:10.404 net/mlx4: not in enabled drivers build config 00:07:10.404 net/mlx5: not in enabled drivers build config 00:07:10.404 net/mvneta: not in enabled drivers build config 00:07:10.404 net/mvpp2: not in enabled drivers build config 00:07:10.404 net/netvsc: not in enabled drivers build config 00:07:10.404 net/nfb: not in enabled drivers build config 00:07:10.404 net/nfp: not in enabled drivers build config 00:07:10.404 net/ngbe: not in enabled drivers build config 00:07:10.404 net/null: not in enabled drivers build config 00:07:10.404 net/octeontx: not in enabled drivers build config 00:07:10.404 net/octeon_ep: not in enabled drivers build config 00:07:10.404 net/pcap: not in enabled drivers build config 00:07:10.404 net/pfe: not in enabled drivers build config 00:07:10.404 net/qede: not in enabled drivers build config 00:07:10.404 net/ring: not in enabled drivers build config 00:07:10.404 net/sfc: not in enabled drivers build config 00:07:10.404 net/softnic: not in enabled drivers build config 00:07:10.404 net/tap: 
not in enabled drivers build config 00:07:10.404 net/thunderx: not in enabled drivers build config 00:07:10.404 net/txgbe: not in enabled drivers build config 00:07:10.404 net/vdev_netvsc: not in enabled drivers build config 00:07:10.404 net/vhost: not in enabled drivers build config 00:07:10.404 net/virtio: not in enabled drivers build config 00:07:10.404 net/vmxnet3: not in enabled drivers build config 00:07:10.404 raw/*: missing internal dependency, "rawdev" 00:07:10.404 crypto/armv8: not in enabled drivers build config 00:07:10.404 crypto/bcmfs: not in enabled drivers build config 00:07:10.404 crypto/caam_jr: not in enabled drivers build config 00:07:10.404 crypto/ccp: not in enabled drivers build config 00:07:10.404 crypto/cnxk: not in enabled drivers build config 00:07:10.404 crypto/dpaa_sec: not in enabled drivers build config 00:07:10.404 crypto/dpaa2_sec: not in enabled drivers build config 00:07:10.404 crypto/ipsec_mb: not in enabled drivers build config 00:07:10.404 crypto/mlx5: not in enabled drivers build config 00:07:10.404 crypto/mvsam: not in enabled drivers build config 00:07:10.404 crypto/nitrox: not in enabled drivers build config 00:07:10.404 crypto/null: not in enabled drivers build config 00:07:10.404 crypto/octeontx: not in enabled drivers build config 00:07:10.404 crypto/openssl: not in enabled drivers build config 00:07:10.404 crypto/scheduler: not in enabled drivers build config 00:07:10.404 crypto/uadk: not in enabled drivers build config 00:07:10.404 crypto/virtio: not in enabled drivers build config 00:07:10.404 compress/isal: not in enabled drivers build config 00:07:10.404 compress/mlx5: not in enabled drivers build config 00:07:10.404 compress/nitrox: not in enabled drivers build config 00:07:10.404 compress/octeontx: not in enabled drivers build config 00:07:10.404 compress/zlib: not in enabled drivers build config 00:07:10.404 regex/*: missing internal dependency, "regexdev" 00:07:10.404 ml/*: missing internal dependency, "mldev" 
00:07:10.404 vdpa/ifc: not in enabled drivers build config 00:07:10.404 vdpa/mlx5: not in enabled drivers build config 00:07:10.404 vdpa/nfp: not in enabled drivers build config 00:07:10.404 vdpa/sfc: not in enabled drivers build config 00:07:10.404 event/*: missing internal dependency, "eventdev" 00:07:10.404 baseband/*: missing internal dependency, "bbdev" 00:07:10.404 gpu/*: missing internal dependency, "gpudev" 00:07:10.404 00:07:10.404 00:07:10.404 Build targets in project: 85 00:07:10.404 00:07:10.404 DPDK 24.03.0 00:07:10.404 00:07:10.404 User defined options 00:07:10.404 buildtype : debug 00:07:10.404 default_library : shared 00:07:10.404 libdir : lib 00:07:10.404 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:10.404 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:10.404 c_link_args : 00:07:10.404 cpu_instruction_set: native 00:07:10.404 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:07:10.404 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:07:10.404 enable_docs : false 00:07:10.405 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:10.405 enable_kmods : false 00:07:10.405 max_lcores : 128 00:07:10.405 tests : false 00:07:10.405 00:07:10.405 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:10.978 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:07:10.978 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:10.978 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:10.978 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:10.978 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:10.978 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:10.978 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:10.978 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:10.978 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:10.978 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:10.978 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:10.978 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:10.978 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:11.235 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:11.235 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:11.235 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:11.235 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:11.235 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:11.235 [18/268] Linking static target lib/librte_kvargs.a 00:07:11.235 [19/268] Linking static target lib/librte_log.a 00:07:11.235 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:11.235 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:11.235 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:11.235 [23/268] Linking static target lib/librte_pci.a 00:07:11.235 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:11.496 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:11.496 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:11.496 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:11.496 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:11.496 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:11.496 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:11.496 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:11.496 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:11.496 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:11.496 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:11.496 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:11.496 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:11.496 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:11.496 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:11.496 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:11.496 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:11.496 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:11.496 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:11.496 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:11.496 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:11.496 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:11.496 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:11.496 
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:11.496 [48/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:11.497 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:11.497 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:11.497 [51/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:11.497 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:11.497 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:11.497 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:11.497 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:11.497 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:11.497 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:11.497 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:11.497 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:11.497 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:11.497 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:11.497 [62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:11.497 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:11.497 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:11.497 [65/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:11.497 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:11.497 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:11.497 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:11.497 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:11.497 [70/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:11.497 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:11.497 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:11.497 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:11.497 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:11.497 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:11.497 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:11.497 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:11.497 [78/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:11.497 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:11.497 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:11.497 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:11.497 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:11.497 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:11.497 [84/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:11.497 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:11.756 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:11.756 [87/268] Linking static target lib/librte_meter.a 00:07:11.756 [88/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:11.756 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:11.756 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:11.756 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:11.756 [92/268] Compiling 
C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:11.756 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:11.756 [94/268] Linking static target lib/librte_ring.a 00:07:11.756 [95/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:11.756 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:11.756 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:11.756 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:11.756 [99/268] Linking static target lib/librte_telemetry.a 00:07:11.756 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:11.756 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:11.756 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:11.756 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:11.756 [104/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:11.756 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:11.756 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:11.756 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:11.756 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:11.756 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:11.756 [110/268] Linking static target lib/librte_mempool.a 00:07:11.756 [111/268] Linking static target lib/librte_rcu.a 00:07:11.756 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:11.756 [113/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:11.756 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:11.756 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:07:11.756 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:11.756 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:11.756 [118/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:11.756 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:11.756 [120/268] Linking static target lib/librte_net.a 00:07:11.756 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:11.756 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:11.756 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:11.756 [124/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:11.756 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:11.756 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:11.757 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:11.757 [128/268] Linking static target lib/librte_eal.a 00:07:11.757 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:11.757 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:11.757 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:11.757 [132/268] Linking static target lib/librte_cmdline.a 00:07:11.757 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:11.757 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:11.757 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:11.757 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:12.016 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:12.016 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:07:12.016 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.016 [140/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:12.016 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.016 [142/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:12.016 [143/268] Linking target lib/librte_log.so.24.1 00:07:12.016 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:12.016 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:12.016 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:12.016 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:12.016 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.016 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:12.016 [150/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.016 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:12.016 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:12.016 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:12.016 [154/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:12.016 [155/268] Linking static target lib/librte_dmadev.a 00:07:12.016 [156/268] Linking static target lib/librte_mbuf.a 00:07:12.016 [157/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:12.016 [158/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:12.016 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:12.016 [160/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:12.016 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:12.016 [162/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:12.016 [163/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:12.016 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:12.016 [165/268] Linking static target lib/librte_reorder.a 00:07:12.016 [166/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:12.016 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:12.016 [168/268] Linking static target lib/librte_timer.a 00:07:12.016 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:12.016 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:12.016 [171/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.016 [172/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:12.016 [173/268] Linking static target lib/librte_compressdev.a 00:07:12.016 [174/268] Linking target lib/librte_kvargs.so.24.1 00:07:12.016 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:12.016 [176/268] Linking static target lib/librte_power.a 00:07:12.016 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:12.016 [178/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:12.016 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:12.275 [180/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:12.275 [181/268] Linking target lib/librte_telemetry.so.24.1 00:07:12.275 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:12.275 [183/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:12.275 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:12.275 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:12.275 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:12.275 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:12.275 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:12.275 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:12.275 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:12.275 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:12.275 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:12.275 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:12.275 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:12.275 [195/268] Linking static target lib/librte_security.a 00:07:12.275 [196/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:12.275 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:12.275 [198/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:12.275 [199/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:12.275 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:12.275 [201/268] Linking static target drivers/librte_mempool_ring.a 00:07:12.275 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:12.275 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:12.275 [204/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:12.275 [205/268] 
Linking static target drivers/librte_bus_vdev.a 00:07:12.275 [206/268] Linking static target lib/librte_hash.a 00:07:12.534 [207/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.534 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:12.534 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:12.534 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:12.534 [211/268] Linking static target drivers/librte_bus_pci.a 00:07:12.534 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.534 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.534 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:12.534 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:12.534 [216/268] Linking static target lib/librte_cryptodev.a 00:07:12.534 [217/268] Linking static target lib/librte_ethdev.a 00:07:12.793 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.793 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.793 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.793 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.793 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.051 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.051 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.052 [225/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:13.362 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.362 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.299 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:14.299 [229/268] Linking static target lib/librte_vhost.a 00:07:14.557 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.932 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.201 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.768 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.768 [234/268] Linking target lib/librte_eal.so.24.1 00:07:22.027 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:22.027 [236/268] Linking target lib/librte_meter.so.24.1 00:07:22.027 [237/268] Linking target lib/librte_ring.so.24.1 00:07:22.027 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:22.027 [239/268] Linking target lib/librte_pci.so.24.1 00:07:22.027 [240/268] Linking target lib/librte_timer.so.24.1 00:07:22.027 [241/268] Linking target lib/librte_dmadev.so.24.1 00:07:22.027 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:22.027 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:22.027 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:22.286 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:22.286 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:22.286 [247/268] Linking target lib/librte_rcu.so.24.1 00:07:22.286 
[248/268] Linking target lib/librte_mempool.so.24.1 00:07:22.286 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:22.286 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:22.286 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:22.286 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:22.286 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:22.545 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:22.545 [255/268] Linking target lib/librte_compressdev.so.24.1 00:07:22.545 [256/268] Linking target lib/librte_reorder.so.24.1 00:07:22.545 [257/268] Linking target lib/librte_net.so.24.1 00:07:22.545 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:22.803 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:22.803 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:22.803 [261/268] Linking target lib/librte_hash.so.24.1 00:07:22.803 [262/268] Linking target lib/librte_security.so.24.1 00:07:22.803 [263/268] Linking target lib/librte_cmdline.so.24.1 00:07:22.803 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:22.803 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:22.803 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:23.062 [267/268] Linking target lib/librte_power.so.24.1 00:07:23.062 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:23.062 INFO: autodetecting backend as ninja 00:07:23.062 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:07:35.409 CC lib/ut_mock/mock.o 00:07:35.409 CC lib/log/log.o 00:07:35.409 CC lib/log/log_flags.o 00:07:35.409 CC lib/log/log_deprecated.o 00:07:35.409 CC lib/ut/ut.o 
00:07:35.409 LIB libspdk_ut_mock.a 00:07:35.409 LIB libspdk_ut.a 00:07:35.409 LIB libspdk_log.a 00:07:35.409 SO libspdk_log.so.7.1 00:07:35.409 SO libspdk_ut_mock.so.6.0 00:07:35.409 SO libspdk_ut.so.2.0 00:07:35.409 SYMLINK libspdk_ut_mock.so 00:07:35.409 SYMLINK libspdk_log.so 00:07:35.409 SYMLINK libspdk_ut.so 00:07:35.409 CC lib/util/base64.o 00:07:35.409 CXX lib/trace_parser/trace.o 00:07:35.409 CC lib/dma/dma.o 00:07:35.409 CC lib/util/bit_array.o 00:07:35.409 CC lib/ioat/ioat.o 00:07:35.409 CC lib/util/cpuset.o 00:07:35.409 CC lib/util/crc16.o 00:07:35.409 CC lib/util/crc32.o 00:07:35.409 CC lib/util/crc32c.o 00:07:35.409 CC lib/util/crc32_ieee.o 00:07:35.409 CC lib/util/crc64.o 00:07:35.409 CC lib/util/dif.o 00:07:35.409 CC lib/util/fd.o 00:07:35.409 CC lib/util/fd_group.o 00:07:35.409 CC lib/util/file.o 00:07:35.409 CC lib/util/hexlify.o 00:07:35.409 CC lib/util/iov.o 00:07:35.409 CC lib/util/math.o 00:07:35.409 CC lib/util/net.o 00:07:35.409 CC lib/util/pipe.o 00:07:35.409 CC lib/util/strerror_tls.o 00:07:35.409 CC lib/util/string.o 00:07:35.410 CC lib/util/uuid.o 00:07:35.410 CC lib/util/xor.o 00:07:35.410 CC lib/util/zipf.o 00:07:35.410 CC lib/util/md5.o 00:07:35.410 CC lib/vfio_user/host/vfio_user_pci.o 00:07:35.410 CC lib/vfio_user/host/vfio_user.o 00:07:35.410 LIB libspdk_dma.a 00:07:35.410 SO libspdk_dma.so.5.0 00:07:35.410 SYMLINK libspdk_dma.so 00:07:35.410 LIB libspdk_ioat.a 00:07:35.410 SO libspdk_ioat.so.7.0 00:07:35.410 SYMLINK libspdk_ioat.so 00:07:35.410 LIB libspdk_vfio_user.a 00:07:35.410 SO libspdk_vfio_user.so.5.0 00:07:35.410 SYMLINK libspdk_vfio_user.so 00:07:35.410 LIB libspdk_util.a 00:07:35.410 SO libspdk_util.so.10.1 00:07:35.410 SYMLINK libspdk_util.so 00:07:35.410 LIB libspdk_trace_parser.a 00:07:35.410 SO libspdk_trace_parser.so.6.0 00:07:35.410 SYMLINK libspdk_trace_parser.so 00:07:35.410 CC lib/idxd/idxd.o 00:07:35.410 CC lib/idxd/idxd_user.o 00:07:35.410 CC lib/vmd/led.o 00:07:35.410 CC lib/vmd/vmd.o 00:07:35.410 CC 
lib/idxd/idxd_kernel.o 00:07:35.410 CC lib/conf/conf.o 00:07:35.410 CC lib/json/json_parse.o 00:07:35.410 CC lib/json/json_util.o 00:07:35.410 CC lib/json/json_write.o 00:07:35.410 CC lib/rdma_utils/rdma_utils.o 00:07:35.410 CC lib/env_dpdk/env.o 00:07:35.410 CC lib/env_dpdk/memory.o 00:07:35.410 CC lib/env_dpdk/pci.o 00:07:35.410 CC lib/env_dpdk/init.o 00:07:35.410 CC lib/env_dpdk/threads.o 00:07:35.410 CC lib/env_dpdk/pci_ioat.o 00:07:35.410 CC lib/env_dpdk/pci_virtio.o 00:07:35.410 CC lib/env_dpdk/pci_vmd.o 00:07:35.410 CC lib/env_dpdk/pci_idxd.o 00:07:35.410 CC lib/env_dpdk/pci_event.o 00:07:35.410 CC lib/env_dpdk/sigbus_handler.o 00:07:35.410 CC lib/env_dpdk/pci_dpdk.o 00:07:35.410 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:35.410 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:35.669 LIB libspdk_conf.a 00:07:35.669 SO libspdk_conf.so.6.0 00:07:35.669 LIB libspdk_rdma_utils.a 00:07:35.669 LIB libspdk_json.a 00:07:35.669 SO libspdk_rdma_utils.so.1.0 00:07:35.669 SO libspdk_json.so.6.0 00:07:35.669 SYMLINK libspdk_conf.so 00:07:35.669 SYMLINK libspdk_rdma_utils.so 00:07:35.669 SYMLINK libspdk_json.so 00:07:35.927 LIB libspdk_idxd.a 00:07:35.927 LIB libspdk_vmd.a 00:07:35.927 SO libspdk_idxd.so.12.1 00:07:35.927 SO libspdk_vmd.so.6.0 00:07:35.927 SYMLINK libspdk_idxd.so 00:07:35.927 SYMLINK libspdk_vmd.so 00:07:35.927 CC lib/jsonrpc/jsonrpc_server.o 00:07:35.927 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:35.927 CC lib/rdma_provider/common.o 00:07:35.927 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:35.927 CC lib/jsonrpc/jsonrpc_client.o 00:07:35.927 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:36.185 LIB libspdk_rdma_provider.a 00:07:36.185 LIB libspdk_jsonrpc.a 00:07:36.185 SO libspdk_rdma_provider.so.7.0 00:07:36.185 SO libspdk_jsonrpc.so.6.0 00:07:36.185 SYMLINK libspdk_rdma_provider.so 00:07:36.444 SYMLINK libspdk_jsonrpc.so 00:07:36.444 LIB libspdk_env_dpdk.a 00:07:36.444 SO libspdk_env_dpdk.so.15.1 00:07:36.703 SYMLINK libspdk_env_dpdk.so 00:07:36.703 CC lib/rpc/rpc.o 
00:07:36.961 LIB libspdk_rpc.a 00:07:36.961 SO libspdk_rpc.so.6.0 00:07:36.961 SYMLINK libspdk_rpc.so 00:07:37.219 CC lib/notify/notify.o 00:07:37.219 CC lib/trace/trace.o 00:07:37.219 CC lib/notify/notify_rpc.o 00:07:37.219 CC lib/trace/trace_flags.o 00:07:37.219 CC lib/trace/trace_rpc.o 00:07:37.219 CC lib/keyring/keyring.o 00:07:37.219 CC lib/keyring/keyring_rpc.o 00:07:37.477 LIB libspdk_notify.a 00:07:37.477 SO libspdk_notify.so.6.0 00:07:37.477 LIB libspdk_keyring.a 00:07:37.477 LIB libspdk_trace.a 00:07:37.477 SO libspdk_keyring.so.2.0 00:07:37.477 SYMLINK libspdk_notify.so 00:07:37.477 SO libspdk_trace.so.11.0 00:07:37.477 SYMLINK libspdk_keyring.so 00:07:37.477 SYMLINK libspdk_trace.so 00:07:38.044 CC lib/thread/thread.o 00:07:38.044 CC lib/thread/iobuf.o 00:07:38.044 CC lib/sock/sock.o 00:07:38.044 CC lib/sock/sock_rpc.o 00:07:38.303 LIB libspdk_sock.a 00:07:38.303 SO libspdk_sock.so.10.0 00:07:38.303 SYMLINK libspdk_sock.so 00:07:38.562 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:38.562 CC lib/nvme/nvme_ctrlr.o 00:07:38.562 CC lib/nvme/nvme_fabric.o 00:07:38.562 CC lib/nvme/nvme_ns_cmd.o 00:07:38.562 CC lib/nvme/nvme_ns.o 00:07:38.562 CC lib/nvme/nvme_pcie_common.o 00:07:38.562 CC lib/nvme/nvme_pcie.o 00:07:38.562 CC lib/nvme/nvme_qpair.o 00:07:38.562 CC lib/nvme/nvme.o 00:07:38.562 CC lib/nvme/nvme_quirks.o 00:07:38.562 CC lib/nvme/nvme_transport.o 00:07:38.562 CC lib/nvme/nvme_discovery.o 00:07:38.562 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:38.562 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:38.562 CC lib/nvme/nvme_tcp.o 00:07:38.562 CC lib/nvme/nvme_opal.o 00:07:38.562 CC lib/nvme/nvme_io_msg.o 00:07:38.562 CC lib/nvme/nvme_poll_group.o 00:07:38.562 CC lib/nvme/nvme_zns.o 00:07:38.562 CC lib/nvme/nvme_stubs.o 00:07:38.562 CC lib/nvme/nvme_auth.o 00:07:38.562 CC lib/nvme/nvme_cuse.o 00:07:38.562 CC lib/nvme/nvme_vfio_user.o 00:07:38.562 CC lib/nvme/nvme_rdma.o 00:07:38.820 LIB libspdk_thread.a 00:07:39.079 SO libspdk_thread.so.11.0 00:07:39.079 SYMLINK 
libspdk_thread.so 00:07:39.337 CC lib/fsdev/fsdev.o 00:07:39.337 CC lib/fsdev/fsdev_io.o 00:07:39.337 CC lib/fsdev/fsdev_rpc.o 00:07:39.337 CC lib/init/json_config.o 00:07:39.337 CC lib/init/subsystem.o 00:07:39.337 CC lib/init/subsystem_rpc.o 00:07:39.337 CC lib/init/rpc.o 00:07:39.337 CC lib/virtio/virtio.o 00:07:39.337 CC lib/virtio/virtio_vfio_user.o 00:07:39.337 CC lib/virtio/virtio_vhost_user.o 00:07:39.337 CC lib/virtio/virtio_pci.o 00:07:39.337 CC lib/vfu_tgt/tgt_endpoint.o 00:07:39.337 CC lib/blob/blobstore.o 00:07:39.337 CC lib/blob/request.o 00:07:39.337 CC lib/vfu_tgt/tgt_rpc.o 00:07:39.337 CC lib/blob/zeroes.o 00:07:39.337 CC lib/accel/accel.o 00:07:39.337 CC lib/blob/blob_bs_dev.o 00:07:39.337 CC lib/accel/accel_rpc.o 00:07:39.337 CC lib/accel/accel_sw.o 00:07:39.595 LIB libspdk_init.a 00:07:39.595 SO libspdk_init.so.6.0 00:07:39.595 LIB libspdk_virtio.a 00:07:39.595 SYMLINK libspdk_init.so 00:07:39.595 LIB libspdk_vfu_tgt.a 00:07:39.595 SO libspdk_virtio.so.7.0 00:07:39.595 SO libspdk_vfu_tgt.so.3.0 00:07:39.853 SYMLINK libspdk_virtio.so 00:07:39.853 SYMLINK libspdk_vfu_tgt.so 00:07:39.853 LIB libspdk_fsdev.a 00:07:39.853 SO libspdk_fsdev.so.2.0 00:07:39.853 SYMLINK libspdk_fsdev.so 00:07:39.853 CC lib/event/app.o 00:07:39.853 CC lib/event/reactor.o 00:07:39.853 CC lib/event/log_rpc.o 00:07:39.853 CC lib/event/app_rpc.o 00:07:39.853 CC lib/event/scheduler_static.o 00:07:40.112 LIB libspdk_accel.a 00:07:40.112 SO libspdk_accel.so.16.0 00:07:40.112 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:40.371 SYMLINK libspdk_accel.so 00:07:40.371 LIB libspdk_event.a 00:07:40.371 LIB libspdk_nvme.a 00:07:40.371 SO libspdk_event.so.14.0 00:07:40.371 SO libspdk_nvme.so.15.0 00:07:40.371 SYMLINK libspdk_event.so 00:07:40.630 CC lib/bdev/bdev.o 00:07:40.630 CC lib/bdev/bdev_rpc.o 00:07:40.630 CC lib/bdev/bdev_zone.o 00:07:40.630 CC lib/bdev/part.o 00:07:40.630 CC lib/bdev/scsi_nvme.o 00:07:40.630 SYMLINK libspdk_nvme.so 00:07:40.630 LIB libspdk_fuse_dispatcher.a 
00:07:40.889 SO libspdk_fuse_dispatcher.so.1.0 00:07:40.889 SYMLINK libspdk_fuse_dispatcher.so 00:07:41.456 LIB libspdk_blob.a 00:07:41.456 SO libspdk_blob.so.11.0 00:07:41.715 SYMLINK libspdk_blob.so 00:07:41.973 CC lib/blobfs/blobfs.o 00:07:41.973 CC lib/blobfs/tree.o 00:07:41.973 CC lib/lvol/lvol.o 00:07:42.541 LIB libspdk_bdev.a 00:07:42.541 SO libspdk_bdev.so.17.0 00:07:42.541 LIB libspdk_blobfs.a 00:07:42.541 SO libspdk_blobfs.so.10.0 00:07:42.541 SYMLINK libspdk_bdev.so 00:07:42.541 LIB libspdk_lvol.a 00:07:42.541 SO libspdk_lvol.so.10.0 00:07:42.541 SYMLINK libspdk_blobfs.so 00:07:42.800 SYMLINK libspdk_lvol.so 00:07:42.800 CC lib/ublk/ublk.o 00:07:42.800 CC lib/ublk/ublk_rpc.o 00:07:42.800 CC lib/nbd/nbd.o 00:07:42.800 CC lib/nbd/nbd_rpc.o 00:07:42.800 CC lib/nvmf/ctrlr.o 00:07:42.800 CC lib/nvmf/ctrlr_discovery.o 00:07:42.800 CC lib/nvmf/ctrlr_bdev.o 00:07:42.800 CC lib/nvmf/subsystem.o 00:07:42.800 CC lib/scsi/dev.o 00:07:42.800 CC lib/nvmf/nvmf.o 00:07:42.800 CC lib/ftl/ftl_core.o 00:07:42.800 CC lib/scsi/lun.o 00:07:42.800 CC lib/nvmf/nvmf_rpc.o 00:07:42.800 CC lib/scsi/port.o 00:07:42.800 CC lib/ftl/ftl_init.o 00:07:42.800 CC lib/nvmf/transport.o 00:07:42.800 CC lib/ftl/ftl_layout.o 00:07:42.800 CC lib/scsi/scsi.o 00:07:42.800 CC lib/nvmf/tcp.o 00:07:42.800 CC lib/scsi/scsi_bdev.o 00:07:42.800 CC lib/ftl/ftl_debug.o 00:07:42.800 CC lib/nvmf/stubs.o 00:07:42.800 CC lib/scsi/scsi_pr.o 00:07:42.800 CC lib/ftl/ftl_io.o 00:07:42.800 CC lib/nvmf/mdns_server.o 00:07:42.800 CC lib/scsi/scsi_rpc.o 00:07:42.800 CC lib/ftl/ftl_sb.o 00:07:42.800 CC lib/ftl/ftl_l2p.o 00:07:42.800 CC lib/scsi/task.o 00:07:42.800 CC lib/nvmf/vfio_user.o 00:07:42.800 CC lib/ftl/ftl_l2p_flat.o 00:07:42.800 CC lib/nvmf/rdma.o 00:07:42.800 CC lib/nvmf/auth.o 00:07:42.800 CC lib/ftl/ftl_nv_cache.o 00:07:42.800 CC lib/ftl/ftl_band.o 00:07:42.800 CC lib/ftl/ftl_band_ops.o 00:07:42.800 CC lib/ftl/ftl_writer.o 00:07:42.800 CC lib/ftl/ftl_rq.o 00:07:42.800 CC lib/ftl/ftl_reloc.o 00:07:42.800 
CC lib/ftl/ftl_l2p_cache.o 00:07:42.800 CC lib/ftl/ftl_p2l.o 00:07:42.800 CC lib/ftl/ftl_p2l_log.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:42.800 CC lib/ftl/utils/ftl_conf.o 00:07:42.800 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:42.800 CC lib/ftl/utils/ftl_md.o 00:07:43.057 CC lib/ftl/utils/ftl_mempool.o 00:07:43.057 CC lib/ftl/utils/ftl_property.o 00:07:43.057 CC lib/ftl/utils/ftl_bitmap.o 00:07:43.057 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:43.057 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:43.057 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:43.057 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:43.057 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:43.057 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:43.057 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:43.057 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:43.057 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:43.057 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:43.057 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:43.057 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:43.057 CC lib/ftl/base/ftl_base_bdev.o 00:07:43.057 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:43.057 CC lib/ftl/base/ftl_base_dev.o 00:07:43.057 CC lib/ftl/ftl_trace.o 00:07:43.315 LIB libspdk_nbd.a 00:07:43.574 SO libspdk_nbd.so.7.0 00:07:43.574 LIB libspdk_scsi.a 00:07:43.574 SYMLINK libspdk_nbd.so 00:07:43.574 LIB libspdk_ublk.a 00:07:43.574 SO libspdk_scsi.so.9.0 00:07:43.574 SO libspdk_ublk.so.3.0 00:07:43.574 SYMLINK libspdk_scsi.so 00:07:43.574 SYMLINK libspdk_ublk.so 00:07:43.831 CC 
lib/vhost/vhost.o 00:07:43.831 CC lib/iscsi/conn.o 00:07:43.831 CC lib/vhost/vhost_rpc.o 00:07:43.831 CC lib/iscsi/init_grp.o 00:07:43.831 CC lib/vhost/vhost_scsi.o 00:07:43.831 CC lib/vhost/vhost_blk.o 00:07:43.831 CC lib/iscsi/iscsi.o 00:07:43.831 CC lib/vhost/rte_vhost_user.o 00:07:43.831 CC lib/iscsi/param.o 00:07:43.831 CC lib/iscsi/portal_grp.o 00:07:43.831 CC lib/iscsi/tgt_node.o 00:07:43.831 CC lib/iscsi/iscsi_subsystem.o 00:07:43.831 CC lib/iscsi/iscsi_rpc.o 00:07:43.831 CC lib/iscsi/task.o 00:07:44.089 LIB libspdk_ftl.a 00:07:44.089 SO libspdk_ftl.so.9.0 00:07:44.347 SYMLINK libspdk_ftl.so 00:07:44.606 LIB libspdk_nvmf.a 00:07:44.865 LIB libspdk_vhost.a 00:07:44.865 SO libspdk_nvmf.so.20.0 00:07:44.865 SO libspdk_vhost.so.8.0 00:07:44.865 SYMLINK libspdk_vhost.so 00:07:44.865 SYMLINK libspdk_nvmf.so 00:07:44.865 LIB libspdk_iscsi.a 00:07:45.124 SO libspdk_iscsi.so.8.0 00:07:45.124 SYMLINK libspdk_iscsi.so 00:07:45.693 CC module/env_dpdk/env_dpdk_rpc.o 00:07:45.693 CC module/vfu_device/vfu_virtio.o 00:07:45.693 CC module/vfu_device/vfu_virtio_blk.o 00:07:45.693 CC module/vfu_device/vfu_virtio_scsi.o 00:07:45.693 CC module/vfu_device/vfu_virtio_rpc.o 00:07:45.693 CC module/vfu_device/vfu_virtio_fs.o 00:07:45.693 CC module/accel/ioat/accel_ioat_rpc.o 00:07:45.693 CC module/accel/ioat/accel_ioat.o 00:07:45.693 CC module/accel/dsa/accel_dsa.o 00:07:45.693 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:45.693 CC module/accel/dsa/accel_dsa_rpc.o 00:07:45.693 CC module/blob/bdev/blob_bdev.o 00:07:45.693 CC module/accel/iaa/accel_iaa.o 00:07:45.693 CC module/accel/iaa/accel_iaa_rpc.o 00:07:45.693 LIB libspdk_env_dpdk_rpc.a 00:07:45.693 CC module/accel/error/accel_error.o 00:07:45.693 CC module/accel/error/accel_error_rpc.o 00:07:45.693 CC module/fsdev/aio/fsdev_aio.o 00:07:45.693 CC module/keyring/linux/keyring.o 00:07:45.693 CC module/keyring/linux/keyring_rpc.o 00:07:45.693 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:45.693 CC 
module/fsdev/aio/linux_aio_mgr.o 00:07:45.693 CC module/sock/posix/posix.o 00:07:45.693 CC module/keyring/file/keyring.o 00:07:45.693 CC module/keyring/file/keyring_rpc.o 00:07:45.693 CC module/scheduler/gscheduler/gscheduler.o 00:07:45.693 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:45.951 SO libspdk_env_dpdk_rpc.so.6.0 00:07:45.951 SYMLINK libspdk_env_dpdk_rpc.so 00:07:45.951 LIB libspdk_scheduler_dynamic.a 00:07:45.951 LIB libspdk_keyring_file.a 00:07:45.951 LIB libspdk_keyring_linux.a 00:07:45.951 LIB libspdk_scheduler_gscheduler.a 00:07:45.951 LIB libspdk_accel_ioat.a 00:07:45.951 LIB libspdk_scheduler_dpdk_governor.a 00:07:45.951 LIB libspdk_accel_iaa.a 00:07:45.951 LIB libspdk_accel_error.a 00:07:45.951 SO libspdk_scheduler_dynamic.so.4.0 00:07:45.951 SO libspdk_scheduler_gscheduler.so.4.0 00:07:45.951 SO libspdk_keyring_file.so.2.0 00:07:45.951 SO libspdk_keyring_linux.so.1.0 00:07:45.951 SO libspdk_accel_ioat.so.6.0 00:07:45.951 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:45.951 SO libspdk_accel_iaa.so.3.0 00:07:45.951 SO libspdk_accel_error.so.2.0 00:07:45.951 LIB libspdk_accel_dsa.a 00:07:45.951 SYMLINK libspdk_scheduler_dynamic.so 00:07:45.951 LIB libspdk_blob_bdev.a 00:07:45.951 SYMLINK libspdk_scheduler_gscheduler.so 00:07:45.951 SYMLINK libspdk_keyring_file.so 00:07:45.951 SYMLINK libspdk_keyring_linux.so 00:07:45.951 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:46.209 SYMLINK libspdk_accel_ioat.so 00:07:46.209 SO libspdk_blob_bdev.so.11.0 00:07:46.209 SO libspdk_accel_dsa.so.5.0 00:07:46.209 SYMLINK libspdk_accel_iaa.so 00:07:46.209 SYMLINK libspdk_accel_error.so 00:07:46.209 SYMLINK libspdk_blob_bdev.so 00:07:46.209 LIB libspdk_vfu_device.a 00:07:46.209 SYMLINK libspdk_accel_dsa.so 00:07:46.209 SO libspdk_vfu_device.so.3.0 00:07:46.209 SYMLINK libspdk_vfu_device.so 00:07:46.468 LIB libspdk_fsdev_aio.a 00:07:46.468 SO libspdk_fsdev_aio.so.1.0 00:07:46.468 LIB libspdk_sock_posix.a 00:07:46.468 SO libspdk_sock_posix.so.6.0 
00:07:46.468 SYMLINK libspdk_fsdev_aio.so 00:07:46.468 SYMLINK libspdk_sock_posix.so 00:07:46.468 CC module/bdev/null/bdev_null_rpc.o 00:07:46.468 CC module/bdev/null/bdev_null.o 00:07:46.468 CC module/bdev/delay/vbdev_delay.o 00:07:46.468 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:46.725 CC module/bdev/raid/bdev_raid.o 00:07:46.725 CC module/blobfs/bdev/blobfs_bdev.o 00:07:46.725 CC module/bdev/raid/bdev_raid_rpc.o 00:07:46.725 CC module/bdev/raid/raid1.o 00:07:46.725 CC module/bdev/raid/bdev_raid_sb.o 00:07:46.725 CC module/bdev/nvme/bdev_nvme.o 00:07:46.725 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:46.725 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:46.725 CC module/bdev/nvme/nvme_rpc.o 00:07:46.725 CC module/bdev/raid/raid0.o 00:07:46.725 CC module/bdev/nvme/bdev_mdns_client.o 00:07:46.725 CC module/bdev/raid/concat.o 00:07:46.725 CC module/bdev/aio/bdev_aio.o 00:07:46.725 CC module/bdev/nvme/vbdev_opal.o 00:07:46.725 CC module/bdev/aio/bdev_aio_rpc.o 00:07:46.725 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:46.725 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:46.725 CC module/bdev/gpt/gpt.o 00:07:46.725 CC module/bdev/gpt/vbdev_gpt.o 00:07:46.725 CC module/bdev/error/vbdev_error.o 00:07:46.725 CC module/bdev/error/vbdev_error_rpc.o 00:07:46.725 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:46.725 CC module/bdev/ftl/bdev_ftl.o 00:07:46.725 CC module/bdev/lvol/vbdev_lvol.o 00:07:46.725 CC module/bdev/split/vbdev_split.o 00:07:46.725 CC module/bdev/iscsi/bdev_iscsi.o 00:07:46.725 CC module/bdev/split/vbdev_split_rpc.o 00:07:46.725 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:46.725 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:46.725 CC module/bdev/passthru/vbdev_passthru.o 00:07:46.725 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:46.725 CC module/bdev/malloc/bdev_malloc.o 00:07:46.725 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:46.725 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:46.725 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:46.725 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:07:46.725 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:46.725 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:46.983 LIB libspdk_blobfs_bdev.a 00:07:46.983 LIB libspdk_bdev_null.a 00:07:46.983 SO libspdk_blobfs_bdev.so.6.0 00:07:46.983 LIB libspdk_bdev_gpt.a 00:07:46.983 LIB libspdk_bdev_split.a 00:07:46.983 SO libspdk_bdev_null.so.6.0 00:07:46.983 SO libspdk_bdev_gpt.so.6.0 00:07:46.983 SO libspdk_bdev_split.so.6.0 00:07:46.983 LIB libspdk_bdev_error.a 00:07:46.983 LIB libspdk_bdev_passthru.a 00:07:46.984 LIB libspdk_bdev_ftl.a 00:07:46.984 SYMLINK libspdk_blobfs_bdev.so 00:07:46.984 LIB libspdk_bdev_zone_block.a 00:07:46.984 SO libspdk_bdev_error.so.6.0 00:07:46.984 LIB libspdk_bdev_delay.a 00:07:46.984 SYMLINK libspdk_bdev_null.so 00:07:46.984 LIB libspdk_bdev_aio.a 00:07:46.984 LIB libspdk_bdev_iscsi.a 00:07:46.984 SYMLINK libspdk_bdev_gpt.so 00:07:46.984 SO libspdk_bdev_passthru.so.6.0 00:07:46.984 SO libspdk_bdev_ftl.so.6.0 00:07:46.984 SYMLINK libspdk_bdev_split.so 00:07:46.984 LIB libspdk_bdev_malloc.a 00:07:46.984 SO libspdk_bdev_zone_block.so.6.0 00:07:46.984 SO libspdk_bdev_delay.so.6.0 00:07:46.984 SO libspdk_bdev_aio.so.6.0 00:07:46.984 SO libspdk_bdev_iscsi.so.6.0 00:07:46.984 SO libspdk_bdev_malloc.so.6.0 00:07:46.984 SYMLINK libspdk_bdev_error.so 00:07:46.984 SYMLINK libspdk_bdev_passthru.so 00:07:46.984 SYMLINK libspdk_bdev_ftl.so 00:07:46.984 SYMLINK libspdk_bdev_zone_block.so 00:07:46.984 SYMLINK libspdk_bdev_delay.so 00:07:46.984 SYMLINK libspdk_bdev_aio.so 00:07:46.984 SYMLINK libspdk_bdev_iscsi.so 00:07:46.984 SYMLINK libspdk_bdev_malloc.so 00:07:47.242 LIB libspdk_bdev_lvol.a 00:07:47.242 LIB libspdk_bdev_virtio.a 00:07:47.242 SO libspdk_bdev_lvol.so.6.0 00:07:47.242 SO libspdk_bdev_virtio.so.6.0 00:07:47.242 SYMLINK libspdk_bdev_lvol.so 00:07:47.242 SYMLINK libspdk_bdev_virtio.so 00:07:47.500 LIB libspdk_bdev_raid.a 00:07:47.500 SO libspdk_bdev_raid.so.6.0 00:07:47.500 SYMLINK libspdk_bdev_raid.so 
00:07:48.435 LIB libspdk_bdev_nvme.a 00:07:48.435 SO libspdk_bdev_nvme.so.7.1 00:07:48.694 SYMLINK libspdk_bdev_nvme.so 00:07:49.261 CC module/event/subsystems/vmd/vmd.o 00:07:49.261 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:49.261 CC module/event/subsystems/iobuf/iobuf.o 00:07:49.261 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:49.261 CC module/event/subsystems/sock/sock.o 00:07:49.261 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:49.261 CC module/event/subsystems/keyring/keyring.o 00:07:49.261 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:49.261 CC module/event/subsystems/scheduler/scheduler.o 00:07:49.261 CC module/event/subsystems/fsdev/fsdev.o 00:07:49.519 LIB libspdk_event_sock.a 00:07:49.519 LIB libspdk_event_vfu_tgt.a 00:07:49.519 LIB libspdk_event_vmd.a 00:07:49.519 LIB libspdk_event_keyring.a 00:07:49.519 LIB libspdk_event_scheduler.a 00:07:49.519 LIB libspdk_event_vhost_blk.a 00:07:49.519 LIB libspdk_event_fsdev.a 00:07:49.519 LIB libspdk_event_iobuf.a 00:07:49.519 SO libspdk_event_keyring.so.1.0 00:07:49.519 SO libspdk_event_sock.so.5.0 00:07:49.519 SO libspdk_event_vfu_tgt.so.3.0 00:07:49.519 SO libspdk_event_scheduler.so.4.0 00:07:49.519 SO libspdk_event_vmd.so.6.0 00:07:49.519 SO libspdk_event_vhost_blk.so.3.0 00:07:49.519 SO libspdk_event_fsdev.so.1.0 00:07:49.519 SO libspdk_event_iobuf.so.3.0 00:07:49.519 SYMLINK libspdk_event_keyring.so 00:07:49.519 SYMLINK libspdk_event_vfu_tgt.so 00:07:49.519 SYMLINK libspdk_event_sock.so 00:07:49.519 SYMLINK libspdk_event_scheduler.so 00:07:49.519 SYMLINK libspdk_event_vmd.so 00:07:49.519 SYMLINK libspdk_event_vhost_blk.so 00:07:49.519 SYMLINK libspdk_event_fsdev.so 00:07:49.519 SYMLINK libspdk_event_iobuf.so 00:07:49.776 CC module/event/subsystems/accel/accel.o 00:07:50.034 LIB libspdk_event_accel.a 00:07:50.034 SO libspdk_event_accel.so.6.0 00:07:50.034 SYMLINK libspdk_event_accel.so 00:07:50.292 CC module/event/subsystems/bdev/bdev.o 00:07:50.549 LIB libspdk_event_bdev.a 00:07:50.549 
SO libspdk_event_bdev.so.6.0 00:07:50.549 SYMLINK libspdk_event_bdev.so 00:07:51.114 CC module/event/subsystems/scsi/scsi.o 00:07:51.114 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:51.114 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:51.114 CC module/event/subsystems/nbd/nbd.o 00:07:51.114 CC module/event/subsystems/ublk/ublk.o 00:07:51.114 LIB libspdk_event_nbd.a 00:07:51.114 LIB libspdk_event_scsi.a 00:07:51.114 LIB libspdk_event_ublk.a 00:07:51.114 SO libspdk_event_scsi.so.6.0 00:07:51.114 SO libspdk_event_nbd.so.6.0 00:07:51.114 SO libspdk_event_ublk.so.3.0 00:07:51.114 LIB libspdk_event_nvmf.a 00:07:51.114 SYMLINK libspdk_event_scsi.so 00:07:51.114 SYMLINK libspdk_event_nbd.so 00:07:51.114 SYMLINK libspdk_event_ublk.so 00:07:51.114 SO libspdk_event_nvmf.so.6.0 00:07:51.372 SYMLINK libspdk_event_nvmf.so 00:07:51.630 CC module/event/subsystems/iscsi/iscsi.o 00:07:51.630 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:51.630 LIB libspdk_event_vhost_scsi.a 00:07:51.630 LIB libspdk_event_iscsi.a 00:07:51.630 SO libspdk_event_vhost_scsi.so.3.0 00:07:51.630 SO libspdk_event_iscsi.so.6.0 00:07:51.889 SYMLINK libspdk_event_vhost_scsi.so 00:07:51.889 SYMLINK libspdk_event_iscsi.so 00:07:51.889 SO libspdk.so.6.0 00:07:51.889 SYMLINK libspdk.so 00:07:52.464 CXX app/trace/trace.o 00:07:52.464 CC app/spdk_lspci/spdk_lspci.o 00:07:52.464 CC app/trace_record/trace_record.o 00:07:52.464 CC app/spdk_top/spdk_top.o 00:07:52.464 CC app/spdk_nvme_discover/discovery_aer.o 00:07:52.464 CC app/spdk_nvme_identify/identify.o 00:07:52.464 TEST_HEADER include/spdk/accel_module.h 00:07:52.464 CC app/spdk_nvme_perf/perf.o 00:07:52.464 CC test/rpc_client/rpc_client_test.o 00:07:52.464 TEST_HEADER include/spdk/accel.h 00:07:52.464 TEST_HEADER include/spdk/barrier.h 00:07:52.464 TEST_HEADER include/spdk/base64.h 00:07:52.464 TEST_HEADER include/spdk/assert.h 00:07:52.464 TEST_HEADER include/spdk/bdev.h 00:07:52.464 TEST_HEADER include/spdk/bdev_zone.h 00:07:52.464 TEST_HEADER 
include/spdk/bdev_module.h 00:07:52.464 TEST_HEADER include/spdk/bit_array.h 00:07:52.464 TEST_HEADER include/spdk/bit_pool.h 00:07:52.464 TEST_HEADER include/spdk/blob_bdev.h 00:07:52.464 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:52.464 TEST_HEADER include/spdk/blobfs.h 00:07:52.464 TEST_HEADER include/spdk/blob.h 00:07:52.464 TEST_HEADER include/spdk/conf.h 00:07:52.464 TEST_HEADER include/spdk/config.h 00:07:52.464 TEST_HEADER include/spdk/cpuset.h 00:07:52.464 TEST_HEADER include/spdk/crc32.h 00:07:52.464 TEST_HEADER include/spdk/crc16.h 00:07:52.464 TEST_HEADER include/spdk/crc64.h 00:07:52.464 TEST_HEADER include/spdk/dif.h 00:07:52.464 TEST_HEADER include/spdk/endian.h 00:07:52.464 TEST_HEADER include/spdk/dma.h 00:07:52.464 TEST_HEADER include/spdk/env_dpdk.h 00:07:52.464 TEST_HEADER include/spdk/event.h 00:07:52.464 TEST_HEADER include/spdk/env.h 00:07:52.464 TEST_HEADER include/spdk/fd.h 00:07:52.464 TEST_HEADER include/spdk/fd_group.h 00:07:52.464 TEST_HEADER include/spdk/fsdev_module.h 00:07:52.464 TEST_HEADER include/spdk/file.h 00:07:52.464 TEST_HEADER include/spdk/fsdev.h 00:07:52.464 TEST_HEADER include/spdk/ftl.h 00:07:52.464 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:52.464 CC app/nvmf_tgt/nvmf_main.o 00:07:52.464 TEST_HEADER include/spdk/gpt_spec.h 00:07:52.464 TEST_HEADER include/spdk/hexlify.h 00:07:52.464 CC app/spdk_dd/spdk_dd.o 00:07:52.464 CC app/iscsi_tgt/iscsi_tgt.o 00:07:52.464 TEST_HEADER include/spdk/histogram_data.h 00:07:52.464 TEST_HEADER include/spdk/idxd_spec.h 00:07:52.464 TEST_HEADER include/spdk/init.h 00:07:52.464 TEST_HEADER include/spdk/ioat.h 00:07:52.464 TEST_HEADER include/spdk/idxd.h 00:07:52.464 TEST_HEADER include/spdk/ioat_spec.h 00:07:52.464 TEST_HEADER include/spdk/iscsi_spec.h 00:07:52.464 TEST_HEADER include/spdk/json.h 00:07:52.464 TEST_HEADER include/spdk/keyring.h 00:07:52.464 TEST_HEADER include/spdk/likely.h 00:07:52.464 TEST_HEADER include/spdk/keyring_module.h 00:07:52.464 TEST_HEADER 
include/spdk/jsonrpc.h 00:07:52.464 TEST_HEADER include/spdk/log.h 00:07:52.464 TEST_HEADER include/spdk/lvol.h 00:07:52.464 TEST_HEADER include/spdk/memory.h 00:07:52.464 TEST_HEADER include/spdk/md5.h 00:07:52.464 TEST_HEADER include/spdk/mmio.h 00:07:52.464 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:52.464 TEST_HEADER include/spdk/nbd.h 00:07:52.464 CC app/spdk_tgt/spdk_tgt.o 00:07:52.464 TEST_HEADER include/spdk/net.h 00:07:52.464 TEST_HEADER include/spdk/notify.h 00:07:52.464 TEST_HEADER include/spdk/nvme.h 00:07:52.464 TEST_HEADER include/spdk/nvme_intel.h 00:07:52.464 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:52.464 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:52.464 TEST_HEADER include/spdk/nvme_spec.h 00:07:52.464 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:52.464 TEST_HEADER include/spdk/nvme_zns.h 00:07:52.464 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:52.464 TEST_HEADER include/spdk/nvmf_spec.h 00:07:52.464 TEST_HEADER include/spdk/nvmf.h 00:07:52.464 TEST_HEADER include/spdk/opal.h 00:07:52.464 TEST_HEADER include/spdk/nvmf_transport.h 00:07:52.464 TEST_HEADER include/spdk/pci_ids.h 00:07:52.464 TEST_HEADER include/spdk/opal_spec.h 00:07:52.464 TEST_HEADER include/spdk/reduce.h 00:07:52.464 TEST_HEADER include/spdk/pipe.h 00:07:52.464 TEST_HEADER include/spdk/rpc.h 00:07:52.464 TEST_HEADER include/spdk/queue.h 00:07:52.464 TEST_HEADER include/spdk/scsi.h 00:07:52.464 TEST_HEADER include/spdk/scheduler.h 00:07:52.465 TEST_HEADER include/spdk/sock.h 00:07:52.465 TEST_HEADER include/spdk/scsi_spec.h 00:07:52.465 TEST_HEADER include/spdk/stdinc.h 00:07:52.465 TEST_HEADER include/spdk/string.h 00:07:52.465 TEST_HEADER include/spdk/thread.h 00:07:52.465 TEST_HEADER include/spdk/trace.h 00:07:52.465 TEST_HEADER include/spdk/ublk.h 00:07:52.465 TEST_HEADER include/spdk/trace_parser.h 00:07:52.465 TEST_HEADER include/spdk/util.h 00:07:52.465 TEST_HEADER include/spdk/tree.h 00:07:52.465 TEST_HEADER include/spdk/version.h 00:07:52.465 TEST_HEADER 
include/spdk/uuid.h 00:07:52.465 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:52.465 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:52.465 TEST_HEADER include/spdk/vhost.h 00:07:52.465 TEST_HEADER include/spdk/xor.h 00:07:52.465 TEST_HEADER include/spdk/vmd.h 00:07:52.465 TEST_HEADER include/spdk/zipf.h 00:07:52.465 CXX test/cpp_headers/accel.o 00:07:52.465 CXX test/cpp_headers/accel_module.o 00:07:52.465 CXX test/cpp_headers/barrier.o 00:07:52.465 CXX test/cpp_headers/base64.o 00:07:52.465 CXX test/cpp_headers/assert.o 00:07:52.465 CXX test/cpp_headers/bdev.o 00:07:52.465 CXX test/cpp_headers/bdev_zone.o 00:07:52.465 CXX test/cpp_headers/bit_array.o 00:07:52.465 CXX test/cpp_headers/bdev_module.o 00:07:52.465 CXX test/cpp_headers/bit_pool.o 00:07:52.465 CXX test/cpp_headers/blob_bdev.o 00:07:52.465 CXX test/cpp_headers/blobfs.o 00:07:52.465 CXX test/cpp_headers/blob.o 00:07:52.465 CXX test/cpp_headers/blobfs_bdev.o 00:07:52.465 CXX test/cpp_headers/config.o 00:07:52.465 CXX test/cpp_headers/conf.o 00:07:52.465 CXX test/cpp_headers/crc16.o 00:07:52.465 CXX test/cpp_headers/cpuset.o 00:07:52.465 CXX test/cpp_headers/crc32.o 00:07:52.465 CXX test/cpp_headers/crc64.o 00:07:52.465 CXX test/cpp_headers/dma.o 00:07:52.465 CXX test/cpp_headers/dif.o 00:07:52.465 CXX test/cpp_headers/endian.o 00:07:52.465 CXX test/cpp_headers/env.o 00:07:52.465 CXX test/cpp_headers/fd_group.o 00:07:52.465 CXX test/cpp_headers/event.o 00:07:52.465 CXX test/cpp_headers/env_dpdk.o 00:07:52.465 CXX test/cpp_headers/fd.o 00:07:52.465 CXX test/cpp_headers/fsdev.o 00:07:52.465 CXX test/cpp_headers/file.o 00:07:52.465 CXX test/cpp_headers/fsdev_module.o 00:07:52.465 CXX test/cpp_headers/fuse_dispatcher.o 00:07:52.465 CXX test/cpp_headers/ftl.o 00:07:52.465 CXX test/cpp_headers/gpt_spec.o 00:07:52.465 CXX test/cpp_headers/hexlify.o 00:07:52.465 CXX test/cpp_headers/histogram_data.o 00:07:52.465 CXX test/cpp_headers/idxd.o 00:07:52.465 CXX test/cpp_headers/idxd_spec.o 00:07:52.465 CXX 
test/cpp_headers/init.o 00:07:52.465 CXX test/cpp_headers/iscsi_spec.o 00:07:52.465 CXX test/cpp_headers/ioat.o 00:07:52.465 CXX test/cpp_headers/jsonrpc.o 00:07:52.465 CXX test/cpp_headers/json.o 00:07:52.465 CXX test/cpp_headers/ioat_spec.o 00:07:52.465 CXX test/cpp_headers/keyring_module.o 00:07:52.465 CXX test/cpp_headers/likely.o 00:07:52.465 CXX test/cpp_headers/keyring.o 00:07:52.465 CXX test/cpp_headers/lvol.o 00:07:52.465 CXX test/cpp_headers/log.o 00:07:52.465 CXX test/cpp_headers/md5.o 00:07:52.465 CXX test/cpp_headers/memory.o 00:07:52.465 CXX test/cpp_headers/mmio.o 00:07:52.465 CXX test/cpp_headers/nvme.o 00:07:52.465 CXX test/cpp_headers/notify.o 00:07:52.465 CXX test/cpp_headers/nbd.o 00:07:52.465 CXX test/cpp_headers/net.o 00:07:52.465 CXX test/cpp_headers/nvme_ocssd.o 00:07:52.465 CXX test/cpp_headers/nvme_intel.o 00:07:52.465 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:52.465 CXX test/cpp_headers/nvme_zns.o 00:07:52.465 CXX test/cpp_headers/nvme_spec.o 00:07:52.465 CC examples/util/zipf/zipf.o 00:07:52.465 CXX test/cpp_headers/nvmf_cmd.o 00:07:52.465 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:52.465 CXX test/cpp_headers/nvmf.o 00:07:52.465 CXX test/cpp_headers/nvmf_spec.o 00:07:52.465 CXX test/cpp_headers/nvmf_transport.o 00:07:52.465 CXX test/cpp_headers/opal.o 00:07:52.465 CC test/app/jsoncat/jsoncat.o 00:07:52.465 CC test/app/histogram_perf/histogram_perf.o 00:07:52.465 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:52.465 CC test/env/vtophys/vtophys.o 00:07:52.465 CC test/env/memory/memory_ut.o 00:07:52.465 CC app/fio/nvme/fio_plugin.o 00:07:52.465 CC test/app/stub/stub.o 00:07:52.465 CC test/dma/test_dma/test_dma.o 00:07:52.465 CC examples/ioat/verify/verify.o 00:07:52.465 CXX test/cpp_headers/opal_spec.o 00:07:52.465 CC test/env/pci/pci_ut.o 00:07:52.465 CC test/app/bdev_svc/bdev_svc.o 00:07:52.465 CC test/thread/poller_perf/poller_perf.o 00:07:52.465 CC examples/ioat/perf/perf.o 00:07:52.729 CC app/fio/bdev/fio_plugin.o 
00:07:52.729 LINK spdk_lspci 00:07:52.729 LINK rpc_client_test 00:07:52.729 LINK spdk_nvme_discover 00:07:52.991 LINK spdk_trace_record 00:07:52.991 LINK nvmf_tgt 00:07:52.991 CC test/env/mem_callbacks/mem_callbacks.o 00:07:52.991 LINK zipf 00:07:52.991 LINK interrupt_tgt 00:07:52.992 LINK histogram_perf 00:07:52.992 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:52.992 LINK jsoncat 00:07:52.992 LINK vtophys 00:07:52.992 LINK iscsi_tgt 00:07:52.992 CXX test/cpp_headers/pipe.o 00:07:52.992 CXX test/cpp_headers/pci_ids.o 00:07:52.992 CXX test/cpp_headers/queue.o 00:07:52.992 CXX test/cpp_headers/reduce.o 00:07:52.992 CXX test/cpp_headers/rpc.o 00:07:52.992 CXX test/cpp_headers/scheduler.o 00:07:52.992 CXX test/cpp_headers/scsi.o 00:07:52.992 CXX test/cpp_headers/scsi_spec.o 00:07:52.992 CXX test/cpp_headers/sock.o 00:07:52.992 LINK stub 00:07:52.992 CXX test/cpp_headers/string.o 00:07:52.992 CXX test/cpp_headers/stdinc.o 00:07:52.992 CXX test/cpp_headers/thread.o 00:07:52.992 CXX test/cpp_headers/trace.o 00:07:52.992 CXX test/cpp_headers/tree.o 00:07:52.992 CXX test/cpp_headers/trace_parser.o 00:07:52.992 CXX test/cpp_headers/ublk.o 00:07:52.992 LINK spdk_tgt 00:07:52.992 CXX test/cpp_headers/util.o 00:07:52.992 CXX test/cpp_headers/uuid.o 00:07:52.992 CXX test/cpp_headers/vfio_user_pci.o 00:07:52.992 CXX test/cpp_headers/version.o 00:07:52.992 CXX test/cpp_headers/vfio_user_spec.o 00:07:52.992 CXX test/cpp_headers/vmd.o 00:07:52.992 CXX test/cpp_headers/vhost.o 00:07:52.992 CXX test/cpp_headers/xor.o 00:07:52.992 CXX test/cpp_headers/zipf.o 00:07:52.992 LINK poller_perf 00:07:53.249 LINK env_dpdk_post_init 00:07:53.249 LINK bdev_svc 00:07:53.249 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:53.249 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:53.249 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:53.249 LINK verify 00:07:53.249 LINK ioat_perf 00:07:53.249 LINK spdk_dd 00:07:53.249 LINK spdk_trace 00:07:53.508 LINK pci_ut 00:07:53.508 LINK test_dma 00:07:53.508 
LINK spdk_nvme_identify 00:07:53.508 CC examples/idxd/perf/perf.o 00:07:53.508 CC examples/vmd/led/led.o 00:07:53.508 CC examples/sock/hello_world/hello_sock.o 00:07:53.508 LINK spdk_bdev 00:07:53.508 CC examples/vmd/lsvmd/lsvmd.o 00:07:53.508 CC examples/thread/thread/thread_ex.o 00:07:53.508 CC test/event/event_perf/event_perf.o 00:07:53.508 CC test/event/reactor/reactor.o 00:07:53.508 CC test/event/reactor_perf/reactor_perf.o 00:07:53.508 CC test/event/app_repeat/app_repeat.o 00:07:53.508 LINK nvme_fuzz 00:07:53.508 LINK vhost_fuzz 00:07:53.508 LINK spdk_nvme 00:07:53.765 CC test/event/scheduler/scheduler.o 00:07:53.765 LINK led 00:07:53.765 LINK spdk_nvme_perf 00:07:53.765 LINK lsvmd 00:07:53.765 CC app/vhost/vhost.o 00:07:53.765 LINK spdk_top 00:07:53.765 LINK mem_callbacks 00:07:53.765 LINK hello_sock 00:07:53.765 LINK event_perf 00:07:53.765 LINK reactor_perf 00:07:53.765 LINK reactor 00:07:53.765 LINK app_repeat 00:07:53.765 LINK thread 00:07:53.765 LINK idxd_perf 00:07:54.023 LINK scheduler 00:07:54.023 LINK vhost 00:07:54.023 CC test/nvme/boot_partition/boot_partition.o 00:07:54.023 CC test/nvme/sgl/sgl.o 00:07:54.023 CC test/nvme/connect_stress/connect_stress.o 00:07:54.023 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:54.023 CC test/nvme/e2edp/nvme_dp.o 00:07:54.023 CC test/nvme/aer/aer.o 00:07:54.023 CC test/nvme/compliance/nvme_compliance.o 00:07:54.023 CC test/nvme/startup/startup.o 00:07:54.023 CC test/nvme/cuse/cuse.o 00:07:54.023 CC test/nvme/simple_copy/simple_copy.o 00:07:54.023 CC test/nvme/reset/reset.o 00:07:54.023 CC test/nvme/fdp/fdp.o 00:07:54.023 CC test/nvme/reserve/reserve.o 00:07:54.023 CC test/nvme/err_injection/err_injection.o 00:07:54.023 CC test/nvme/overhead/overhead.o 00:07:54.023 CC test/nvme/fused_ordering/fused_ordering.o 00:07:54.023 CC test/blobfs/mkfs/mkfs.o 00:07:54.023 CC test/accel/dif/dif.o 00:07:54.023 LINK memory_ut 00:07:54.023 LINK boot_partition 00:07:54.023 CC test/lvol/esnap/esnap.o 00:07:54.023 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:07:54.282 CC examples/nvme/hello_world/hello_world.o 00:07:54.282 LINK doorbell_aers 00:07:54.282 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:54.282 CC examples/nvme/abort/abort.o 00:07:54.282 CC examples/nvme/reconnect/reconnect.o 00:07:54.282 LINK connect_stress 00:07:54.282 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:54.282 CC examples/nvme/hotplug/hotplug.o 00:07:54.282 CC examples/nvme/arbitration/arbitration.o 00:07:54.282 LINK err_injection 00:07:54.282 LINK reserve 00:07:54.282 LINK startup 00:07:54.282 LINK fused_ordering 00:07:54.282 LINK mkfs 00:07:54.282 LINK sgl 00:07:54.282 LINK simple_copy 00:07:54.282 LINK overhead 00:07:54.282 LINK nvme_dp 00:07:54.282 LINK reset 00:07:54.282 LINK aer 00:07:54.282 CC examples/accel/perf/accel_perf.o 00:07:54.282 LINK fdp 00:07:54.282 LINK nvme_compliance 00:07:54.282 CC examples/blob/cli/blobcli.o 00:07:54.282 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:54.282 CC examples/blob/hello_world/hello_blob.o 00:07:54.282 LINK cmb_copy 00:07:54.282 LINK pmr_persistence 00:07:54.282 LINK hello_world 00:07:54.540 LINK hotplug 00:07:54.540 LINK abort 00:07:54.540 LINK reconnect 00:07:54.540 LINK arbitration 00:07:54.540 LINK hello_blob 00:07:54.540 LINK nvme_manage 00:07:54.540 LINK hello_fsdev 00:07:54.540 LINK dif 00:07:54.540 LINK iscsi_fuzz 00:07:54.798 LINK accel_perf 00:07:54.798 LINK blobcli 00:07:55.058 LINK cuse 00:07:55.058 CC test/bdev/bdevio/bdevio.o 00:07:55.058 CC examples/bdev/hello_world/hello_bdev.o 00:07:55.317 CC examples/bdev/bdevperf/bdevperf.o 00:07:55.317 LINK hello_bdev 00:07:55.576 LINK bdevio 00:07:55.834 LINK bdevperf 00:07:56.401 CC examples/nvmf/nvmf/nvmf.o 00:07:56.660 LINK nvmf 00:07:58.038 LINK esnap 00:07:58.038 00:07:58.038 real 0m56.017s 00:07:58.038 user 8m2.716s 00:07:58.038 sys 3m40.949s 00:07:58.038 14:27:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:58.038 14:27:09 make -- common/autotest_common.sh@10 -- 
$ set +x 00:07:58.038 ************************************ 00:07:58.038 END TEST make 00:07:58.038 ************************************ 00:07:58.038 14:27:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:58.038 14:27:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:58.038 14:27:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:58.038 14:27:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.038 14:27:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:58.038 14:27:09 -- pm/common@44 -- $ pid=1318610 00:07:58.038 14:27:09 -- pm/common@50 -- $ kill -TERM 1318610 00:07:58.038 14:27:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.038 14:27:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:58.038 14:27:09 -- pm/common@44 -- $ pid=1318611 00:07:58.038 14:27:09 -- pm/common@50 -- $ kill -TERM 1318611 00:07:58.038 14:27:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.038 14:27:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:58.038 14:27:09 -- pm/common@44 -- $ pid=1318613 00:07:58.038 14:27:09 -- pm/common@50 -- $ kill -TERM 1318613 00:07:58.038 14:27:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.038 14:27:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:58.038 14:27:09 -- pm/common@44 -- $ pid=1318637 00:07:58.038 14:27:09 -- pm/common@50 -- $ sudo -E kill -TERM 1318637 00:07:58.038 14:27:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:58.038 14:27:09 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:58.297 14:27:10 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.297 14:27:10 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.297 14:27:10 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.297 14:27:10 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.297 14:27:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.297 14:27:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.297 14:27:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.297 14:27:10 -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.297 14:27:10 -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.297 14:27:10 -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.297 14:27:10 -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.297 14:27:10 -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.297 14:27:10 -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.297 14:27:10 -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.297 14:27:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.297 14:27:10 -- scripts/common.sh@344 -- # case "$op" in 00:07:58.297 14:27:10 -- scripts/common.sh@345 -- # : 1 00:07:58.297 14:27:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.297 14:27:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.297 14:27:10 -- scripts/common.sh@365 -- # decimal 1 00:07:58.297 14:27:10 -- scripts/common.sh@353 -- # local d=1 00:07:58.297 14:27:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.297 14:27:10 -- scripts/common.sh@355 -- # echo 1 00:07:58.297 14:27:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.297 14:27:10 -- scripts/common.sh@366 -- # decimal 2 00:07:58.297 14:27:10 -- scripts/common.sh@353 -- # local d=2 00:07:58.298 14:27:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.298 14:27:10 -- scripts/common.sh@355 -- # echo 2 00:07:58.298 14:27:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.298 14:27:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.298 14:27:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.298 14:27:10 -- scripts/common.sh@368 -- # return 0 00:07:58.298 14:27:10 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.298 14:27:10 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.298 --rc genhtml_branch_coverage=1 00:07:58.298 --rc genhtml_function_coverage=1 00:07:58.298 --rc genhtml_legend=1 00:07:58.298 --rc geninfo_all_blocks=1 00:07:58.298 --rc geninfo_unexecuted_blocks=1 00:07:58.298 00:07:58.298 ' 00:07:58.298 14:27:10 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.298 --rc genhtml_branch_coverage=1 00:07:58.298 --rc genhtml_function_coverage=1 00:07:58.298 --rc genhtml_legend=1 00:07:58.298 --rc geninfo_all_blocks=1 00:07:58.298 --rc geninfo_unexecuted_blocks=1 00:07:58.298 00:07:58.298 ' 00:07:58.298 14:27:10 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.298 --rc genhtml_branch_coverage=1 00:07:58.298 --rc 
genhtml_function_coverage=1 00:07:58.298 --rc genhtml_legend=1 00:07:58.298 --rc geninfo_all_blocks=1 00:07:58.298 --rc geninfo_unexecuted_blocks=1 00:07:58.298 00:07:58.298 ' 00:07:58.298 14:27:10 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.298 --rc genhtml_branch_coverage=1 00:07:58.298 --rc genhtml_function_coverage=1 00:07:58.298 --rc genhtml_legend=1 00:07:58.298 --rc geninfo_all_blocks=1 00:07:58.298 --rc geninfo_unexecuted_blocks=1 00:07:58.298 00:07:58.298 ' 00:07:58.298 14:27:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.298 14:27:10 -- nvmf/common.sh@7 -- # uname -s 00:07:58.298 14:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.298 14:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.298 14:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.298 14:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.298 14:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.298 14:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.298 14:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.298 14:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.298 14:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.298 14:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.298 14:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.298 14:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.298 14:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.298 14:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.298 14:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.298 14:27:10 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.298 14:27:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.298 14:27:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.298 14:27:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.298 14:27:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.298 14:27:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.298 14:27:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.298 14:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.298 14:27:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.298 14:27:10 -- paths/export.sh@5 -- # export PATH 00:07:58.298 14:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.298 14:27:10 -- nvmf/common.sh@51 -- # : 0 00:07:58.298 14:27:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.298 14:27:10 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:07:58.298 14:27:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.298 14:27:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.298 14:27:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.298 14:27:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.298 14:27:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.298 14:27:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.298 14:27:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.298 14:27:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:58.298 14:27:10 -- spdk/autotest.sh@32 -- # uname -s 00:07:58.298 14:27:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:58.298 14:27:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:58.298 14:27:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:58.298 14:27:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:58.298 14:27:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:58.298 14:27:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:58.298 14:27:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:58.298 14:27:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:58.298 14:27:10 -- spdk/autotest.sh@48 -- # udevadm_pid=1381521 00:07:58.298 14:27:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:58.298 14:27:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:58.298 14:27:10 -- pm/common@17 -- # local monitor 00:07:58.298 14:27:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.298 14:27:10 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:07:58.298 14:27:10 -- pm/common@21 -- # date +%s 00:07:58.298 14:27:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.298 14:27:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:58.298 14:27:10 -- pm/common@21 -- # date +%s 00:07:58.298 14:27:10 -- pm/common@25 -- # sleep 1 00:07:58.298 14:27:10 -- pm/common@21 -- # date +%s 00:07:58.298 14:27:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109230 00:07:58.298 14:27:10 -- pm/common@21 -- # date +%s 00:07:58.298 14:27:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109230 00:07:58.298 14:27:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109230 00:07:58.298 14:27:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732109230 00:07:58.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109230_collect-cpu-load.pm.log 00:07:58.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109230_collect-vmstat.pm.log 00:07:58.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109230_collect-cpu-temp.pm.log 00:07:58.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732109230_collect-bmc-pm.bmc.pm.log 00:07:59.235 
14:27:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:59.235 14:27:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:59.235 14:27:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.235 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.235 14:27:11 -- spdk/autotest.sh@59 -- # create_test_list 00:07:59.235 14:27:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:59.235 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.595 14:27:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:59.595 14:27:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.595 14:27:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.595 14:27:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:59.595 14:27:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.595 14:27:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:59.595 14:27:11 -- common/autotest_common.sh@1457 -- # uname 00:07:59.595 14:27:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:59.595 14:27:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:59.595 14:27:11 -- common/autotest_common.sh@1477 -- # uname 00:07:59.595 14:27:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:59.595 14:27:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:59.595 14:27:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:59.595 lcov: LCOV version 1.15 00:07:59.595 14:27:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:08:21.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:21.534 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:24.850 14:27:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:24.850 14:27:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.850 14:27:36 -- common/autotest_common.sh@10 -- # set +x 00:08:24.850 14:27:36 -- spdk/autotest.sh@78 -- # rm -f 00:08:24.850 14:27:36 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:27.386 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:08:27.644 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:08:27.644 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:08:27.644 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:08:27.644 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:08:27.645 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:08:27.902 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:08:27.902 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:08:27.902 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:08:27.902 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:08:27.902 14:27:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:27.902 14:27:39 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:27.903 14:27:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:27.903 14:27:39 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:27.903 14:27:39 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:27.903 14:27:39 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:27.903 14:27:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:27.903 14:27:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:27.903 14:27:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:27.903 14:27:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:27.903 14:27:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:27.903 14:27:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:27.903 14:27:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:27.903 14:27:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:27.903 14:27:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:27.903 No valid GPT data, bailing 00:08:27.903 14:27:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:27.903 14:27:39 -- scripts/common.sh@394 -- # pt= 00:08:27.903 14:27:39 -- scripts/common.sh@395 -- # return 1 00:08:27.903 14:27:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:27.903 1+0 records in 00:08:27.903 1+0 records out 00:08:27.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446358 s, 235 MB/s 00:08:27.903 14:27:39 -- spdk/autotest.sh@105 -- # sync 00:08:27.903 14:27:39 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:08:27.903 14:27:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:08:27.903 14:27:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:08:34.599 14:27:45 -- spdk/autotest.sh@111 -- # uname -s
00:08:34.599 14:27:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:08:34.599 14:27:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:08:34.599 14:27:45 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:08:36.504 Hugepages
00:08:36.504 node hugesize free / total
00:08:36.504 node0 1048576kB 0 / 0
00:08:36.504 node0 2048kB 0 / 0
00:08:36.504 node1 1048576kB 0 / 0
00:08:36.504 node1 2048kB 0 / 0
00:08:36.504
00:08:36.504 Type BDF Vendor Device NUMA Driver Device Block devices
00:08:36.504 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:08:36.504 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:08:36.504 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:08:36.505 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:08:36.505 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:08:36.505 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:08:36.505 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:08:36.505 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:08:36.505 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:08:36.505 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:08:36.505 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:08:36.505 14:27:48 -- spdk/autotest.sh@117 -- # uname -s
00:08:36.505 14:27:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:08:36.505 14:27:48 -- spdk/autotest.sh@119 --
nvme_namespace_revert 00:08:36.505 14:27:48 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:39.798 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:39.798 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:40.366 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:40.366 14:27:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:41.305 14:27:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:41.305 14:27:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:41.305 14:27:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:41.305 14:27:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:41.305 14:27:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:41.305 14:27:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:41.305 14:27:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:41.305 14:27:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:41.305 14:27:53 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:08:41.305 14:27:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:41.305 14:27:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:08:41.305 14:27:53 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:44.596 Waiting for block devices as requested 00:08:44.596 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:08:44.596 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:44.596 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:44.596 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:44.596 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:44.596 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:44.596 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:44.855 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:44.855 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:44.855 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:45.115 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:45.115 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:45.115 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:45.375 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:45.375 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:45.375 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:45.375 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:45.635 14:27:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:45.635 14:27:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:08:45.635 14:27:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:08:45.635 14:27:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:45.635 14:27:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:45.635 14:27:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:45.635 14:27:57 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:08:45.635 14:27:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:45.635 14:27:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:45.635 14:27:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:45.635 14:27:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:45.635 14:27:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:45.635 14:27:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:45.635 14:27:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:45.635 14:27:57 -- common/autotest_common.sh@1543 -- # continue 00:08:45.635 14:27:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:45.635 14:27:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.635 14:27:57 -- common/autotest_common.sh@10 -- # set +x 00:08:45.635 14:27:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:45.635 14:27:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.635 14:27:57 -- common/autotest_common.sh@10 -- # set +x 00:08:45.635 14:27:57 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:48.926 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:08:48.926 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:48.926 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:49.493 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:08:49.493 14:28:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:49.493 14:28:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.493 14:28:01 -- common/autotest_common.sh@10 -- # set +x 00:08:49.493 14:28:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:49.493 14:28:01 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:49.493 14:28:01 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:49.493 14:28:01 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:49.493 14:28:01 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:49.493 14:28:01 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:49.493 14:28:01 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:49.493 14:28:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:49.493 14:28:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:49.493 14:28:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:49.493 14:28:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:08:49.493 14:28:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:49.493 14:28:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:49.753 14:28:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:49.753 14:28:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:08:49.753 14:28:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:49.753 14:28:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:08:49.753 14:28:01 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:08:49.753 14:28:01 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:49.753 14:28:01 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:08:49.753 14:28:01 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:08:49.753 14:28:01 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:08:49.753 14:28:01 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:08:49.753 14:28:01 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1396019 00:08:49.753 14:28:01 -- common/autotest_common.sh@1585 -- # waitforlisten 1396019 00:08:49.753 14:28:01 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:49.753 14:28:01 -- common/autotest_common.sh@835 -- # '[' -z 1396019 ']' 00:08:49.753 14:28:01 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.753 14:28:01 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.753 14:28:01 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.753 14:28:01 -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:49.753 14:28:01 -- common/autotest_common.sh@10 -- # set +x
00:08:49.753 [2024-11-20 14:28:01.523006] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:08:49.753 [2024-11-20 14:28:01.523064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396019 ]
00:08:49.753 [2024-11-20 14:28:01.598367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:49.753 [2024-11-20 14:28:01.641436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.013 14:28:01 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:50.013 14:28:01 -- common/autotest_common.sh@868 -- # return 0
00:08:50.013 14:28:01 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:08:50.013 14:28:01 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:08:50.013 14:28:01 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:08:53.299 nvme0n1
00:08:53.300 14:28:04 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:08:53.300 [2024-11-20 14:28:05.050023] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:08:53.300 request:
00:08:53.300 {
00:08:53.300 "nvme_ctrlr_name": "nvme0",
00:08:53.300 "password": "test",
00:08:53.300 "method": "bdev_nvme_opal_revert",
00:08:53.300 "req_id": 1
00:08:53.300 }
00:08:53.300 Got JSON-RPC error response
00:08:53.300 response:
00:08:53.300 {
00:08:53.300 "code": -32602,
00:08:53.300 "message": "Invalid parameters"
00:08:53.300 }
00:08:53.300 14:28:05 -- common/autotest_common.sh@1591 -- # true
00:08:53.300 14:28:05 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:08:53.300 14:28:05 -- common/autotest_common.sh@1595 -- # killprocess 1396019 00:08:53.300 14:28:05 -- common/autotest_common.sh@954 -- # '[' -z 1396019 ']' 00:08:53.300 14:28:05 -- common/autotest_common.sh@958 -- # kill -0 1396019 00:08:53.300 14:28:05 -- common/autotest_common.sh@959 -- # uname 00:08:53.300 14:28:05 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.300 14:28:05 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1396019 00:08:53.300 14:28:05 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.300 14:28:05 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.300 14:28:05 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1396019' 00:08:53.300 killing process with pid 1396019 00:08:53.300 14:28:05 -- common/autotest_common.sh@973 -- # kill 1396019 00:08:53.300 14:28:05 -- common/autotest_common.sh@978 -- # wait 1396019 00:08:55.205 14:28:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:55.205 14:28:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:55.205 14:28:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:55.205 14:28:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:55.205 14:28:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:55.205 14:28:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.205 14:28:06 -- common/autotest_common.sh@10 -- # set +x 00:08:55.205 14:28:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:55.205 14:28:06 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:55.205 14:28:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.205 14:28:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.205 14:28:06 -- common/autotest_common.sh@10 -- # set +x 00:08:55.205 ************************************ 00:08:55.205 START TEST env 00:08:55.205 
************************************ 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:55.205 * Looking for test storage... 00:08:55.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.205 14:28:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.205 14:28:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.205 14:28:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.205 14:28:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.205 14:28:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.205 14:28:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.205 14:28:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.205 14:28:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.205 14:28:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.205 14:28:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.205 14:28:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.205 14:28:06 env -- scripts/common.sh@344 -- # case "$op" in 00:08:55.205 14:28:06 env -- scripts/common.sh@345 -- # : 1 00:08:55.205 14:28:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.205 14:28:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.205 14:28:06 env -- scripts/common.sh@365 -- # decimal 1 00:08:55.205 14:28:06 env -- scripts/common.sh@353 -- # local d=1 00:08:55.205 14:28:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.205 14:28:06 env -- scripts/common.sh@355 -- # echo 1 00:08:55.205 14:28:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.205 14:28:06 env -- scripts/common.sh@366 -- # decimal 2 00:08:55.205 14:28:06 env -- scripts/common.sh@353 -- # local d=2 00:08:55.205 14:28:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.205 14:28:06 env -- scripts/common.sh@355 -- # echo 2 00:08:55.205 14:28:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.205 14:28:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.205 14:28:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.205 14:28:06 env -- scripts/common.sh@368 -- # return 0 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.205 --rc genhtml_branch_coverage=1 00:08:55.205 --rc genhtml_function_coverage=1 00:08:55.205 --rc genhtml_legend=1 00:08:55.205 --rc geninfo_all_blocks=1 00:08:55.205 --rc geninfo_unexecuted_blocks=1 00:08:55.205 00:08:55.205 ' 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.205 --rc genhtml_branch_coverage=1 00:08:55.205 --rc genhtml_function_coverage=1 00:08:55.205 --rc genhtml_legend=1 00:08:55.205 --rc geninfo_all_blocks=1 00:08:55.205 --rc geninfo_unexecuted_blocks=1 00:08:55.205 00:08:55.205 ' 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:55.205 --rc genhtml_branch_coverage=1 00:08:55.205 --rc genhtml_function_coverage=1 00:08:55.205 --rc genhtml_legend=1 00:08:55.205 --rc geninfo_all_blocks=1 00:08:55.205 --rc geninfo_unexecuted_blocks=1 00:08:55.205 00:08:55.205 ' 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.205 --rc genhtml_branch_coverage=1 00:08:55.205 --rc genhtml_function_coverage=1 00:08:55.205 --rc genhtml_legend=1 00:08:55.205 --rc geninfo_all_blocks=1 00:08:55.205 --rc geninfo_unexecuted_blocks=1 00:08:55.205 00:08:55.205 ' 00:08:55.205 14:28:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.205 14:28:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.205 14:28:06 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.205 ************************************ 00:08:55.205 START TEST env_memory 00:08:55.205 ************************************ 00:08:55.206 14:28:06 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:55.206 00:08:55.206 00:08:55.206 CUnit - A unit testing framework for C - Version 2.1-3 00:08:55.206 http://cunit.sourceforge.net/ 00:08:55.206 00:08:55.206 00:08:55.206 Suite: memory 00:08:55.206 Test: alloc and free memory map ...[2024-11-20 14:28:07.012259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:55.206 passed 00:08:55.206 Test: mem map translation ...[2024-11-20 14:28:07.031348] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:55.206 [2024-11-20 
14:28:07.031361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:55.206 [2024-11-20 14:28:07.031410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:55.206 [2024-11-20 14:28:07.031417] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:55.206 passed
00:08:55.206 Test: mem map registration ...[2024-11-20 14:28:07.069138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:08:55.206 [2024-11-20 14:28:07.069152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:08:55.206 passed
00:08:55.206 Test: mem map adjacent registrations ...passed
00:08:55.206
00:08:55.206 Run Summary: Type Total Ran Passed Failed Inactive
00:08:55.206 suites 1 1 n/a 0 0
00:08:55.206 tests 4 4 4 0 0
00:08:55.206 asserts 152 152 152 0 n/a
00:08:55.206
00:08:55.206 Elapsed time = 0.140 seconds
00:08:55.206
00:08:55.206 real 0m0.153s
00:08:55.206 user 0m0.147s
00:08:55.206 sys 0m0.006s
00:08:55.206 14:28:07 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.206 14:28:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:08:55.206 ************************************
00:08:55.206 END TEST env_memory
00:08:55.206 ************************************
00:08:55.206 14:28:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:55.206 14:28:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1
']' 00:08:55.206 14:28:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.206 14:28:07 env -- common/autotest_common.sh@10 -- # set +x 00:08:55.465 ************************************ 00:08:55.465 START TEST env_vtophys 00:08:55.465 ************************************ 00:08:55.465 14:28:07 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:55.465 EAL: lib.eal log level changed from notice to debug 00:08:55.465 EAL: Detected lcore 0 as core 0 on socket 0 00:08:55.465 EAL: Detected lcore 1 as core 1 on socket 0 00:08:55.465 EAL: Detected lcore 2 as core 2 on socket 0 00:08:55.465 EAL: Detected lcore 3 as core 3 on socket 0 00:08:55.465 EAL: Detected lcore 4 as core 4 on socket 0 00:08:55.465 EAL: Detected lcore 5 as core 5 on socket 0 00:08:55.465 EAL: Detected lcore 6 as core 6 on socket 0 00:08:55.465 EAL: Detected lcore 7 as core 8 on socket 0 00:08:55.465 EAL: Detected lcore 8 as core 9 on socket 0 00:08:55.465 EAL: Detected lcore 9 as core 10 on socket 0 00:08:55.465 EAL: Detected lcore 10 as core 11 on socket 0 00:08:55.465 EAL: Detected lcore 11 as core 12 on socket 0 00:08:55.465 EAL: Detected lcore 12 as core 13 on socket 0 00:08:55.465 EAL: Detected lcore 13 as core 16 on socket 0 00:08:55.465 EAL: Detected lcore 14 as core 17 on socket 0 00:08:55.465 EAL: Detected lcore 15 as core 18 on socket 0 00:08:55.465 EAL: Detected lcore 16 as core 19 on socket 0 00:08:55.465 EAL: Detected lcore 17 as core 20 on socket 0 00:08:55.465 EAL: Detected lcore 18 as core 21 on socket 0 00:08:55.465 EAL: Detected lcore 19 as core 25 on socket 0 00:08:55.465 EAL: Detected lcore 20 as core 26 on socket 0 00:08:55.465 EAL: Detected lcore 21 as core 27 on socket 0 00:08:55.465 EAL: Detected lcore 22 as core 28 on socket 0 00:08:55.465 EAL: Detected lcore 23 as core 29 on socket 0 00:08:55.465 EAL: Detected lcore 24 as core 0 on socket 1 00:08:55.465 EAL: Detected lcore 25 
as core 1 on socket 1 00:08:55.465 EAL: Detected lcore 26 as core 2 on socket 1 00:08:55.465 EAL: Detected lcore 27 as core 3 on socket 1 00:08:55.465 EAL: Detected lcore 28 as core 4 on socket 1 00:08:55.465 EAL: Detected lcore 29 as core 5 on socket 1 00:08:55.465 EAL: Detected lcore 30 as core 6 on socket 1 00:08:55.465 EAL: Detected lcore 31 as core 9 on socket 1 00:08:55.465 EAL: Detected lcore 32 as core 10 on socket 1 00:08:55.465 EAL: Detected lcore 33 as core 11 on socket 1 00:08:55.465 EAL: Detected lcore 34 as core 12 on socket 1 00:08:55.465 EAL: Detected lcore 35 as core 13 on socket 1 00:08:55.465 EAL: Detected lcore 36 as core 16 on socket 1 00:08:55.465 EAL: Detected lcore 37 as core 17 on socket 1 00:08:55.465 EAL: Detected lcore 38 as core 18 on socket 1 00:08:55.465 EAL: Detected lcore 39 as core 19 on socket 1 00:08:55.465 EAL: Detected lcore 40 as core 20 on socket 1 00:08:55.465 EAL: Detected lcore 41 as core 21 on socket 1 00:08:55.465 EAL: Detected lcore 42 as core 24 on socket 1 00:08:55.465 EAL: Detected lcore 43 as core 25 on socket 1 00:08:55.465 EAL: Detected lcore 44 as core 26 on socket 1 00:08:55.465 EAL: Detected lcore 45 as core 27 on socket 1 00:08:55.465 EAL: Detected lcore 46 as core 28 on socket 1 00:08:55.465 EAL: Detected lcore 47 as core 29 on socket 1 00:08:55.465 EAL: Detected lcore 48 as core 0 on socket 0 00:08:55.465 EAL: Detected lcore 49 as core 1 on socket 0 00:08:55.465 EAL: Detected lcore 50 as core 2 on socket 0 00:08:55.465 EAL: Detected lcore 51 as core 3 on socket 0 00:08:55.465 EAL: Detected lcore 52 as core 4 on socket 0 00:08:55.465 EAL: Detected lcore 53 as core 5 on socket 0 00:08:55.465 EAL: Detected lcore 54 as core 6 on socket 0 00:08:55.465 EAL: Detected lcore 55 as core 8 on socket 0 00:08:55.465 EAL: Detected lcore 56 as core 9 on socket 0 00:08:55.465 EAL: Detected lcore 57 as core 10 on socket 0 00:08:55.465 EAL: Detected lcore 58 as core 11 on socket 0 00:08:55.465 EAL: Detected lcore 59 as core 
12 on socket 0 00:08:55.465 EAL: Detected lcore 60 as core 13 on socket 0 00:08:55.465 EAL: Detected lcore 61 as core 16 on socket 0 00:08:55.465 EAL: Detected lcore 62 as core 17 on socket 0 00:08:55.465 EAL: Detected lcore 63 as core 18 on socket 0 00:08:55.465 EAL: Detected lcore 64 as core 19 on socket 0 00:08:55.465 EAL: Detected lcore 65 as core 20 on socket 0 00:08:55.465 EAL: Detected lcore 66 as core 21 on socket 0 00:08:55.465 EAL: Detected lcore 67 as core 25 on socket 0 00:08:55.465 EAL: Detected lcore 68 as core 26 on socket 0 00:08:55.465 EAL: Detected lcore 69 as core 27 on socket 0 00:08:55.465 EAL: Detected lcore 70 as core 28 on socket 0 00:08:55.465 EAL: Detected lcore 71 as core 29 on socket 0 00:08:55.465 EAL: Detected lcore 72 as core 0 on socket 1 00:08:55.465 EAL: Detected lcore 73 as core 1 on socket 1 00:08:55.465 EAL: Detected lcore 74 as core 2 on socket 1 00:08:55.465 EAL: Detected lcore 75 as core 3 on socket 1 00:08:55.465 EAL: Detected lcore 76 as core 4 on socket 1 00:08:55.465 EAL: Detected lcore 77 as core 5 on socket 1 00:08:55.465 EAL: Detected lcore 78 as core 6 on socket 1 00:08:55.465 EAL: Detected lcore 79 as core 9 on socket 1 00:08:55.465 EAL: Detected lcore 80 as core 10 on socket 1 00:08:55.465 EAL: Detected lcore 81 as core 11 on socket 1 00:08:55.465 EAL: Detected lcore 82 as core 12 on socket 1 00:08:55.465 EAL: Detected lcore 83 as core 13 on socket 1 00:08:55.465 EAL: Detected lcore 84 as core 16 on socket 1 00:08:55.465 EAL: Detected lcore 85 as core 17 on socket 1 00:08:55.465 EAL: Detected lcore 86 as core 18 on socket 1 00:08:55.465 EAL: Detected lcore 87 as core 19 on socket 1 00:08:55.465 EAL: Detected lcore 88 as core 20 on socket 1 00:08:55.465 EAL: Detected lcore 89 as core 21 on socket 1 00:08:55.465 EAL: Detected lcore 90 as core 24 on socket 1 00:08:55.465 EAL: Detected lcore 91 as core 25 on socket 1 00:08:55.465 EAL: Detected lcore 92 as core 26 on socket 1 00:08:55.465 EAL: Detected lcore 93 as core 
27 on socket 1 00:08:55.465 EAL: Detected lcore 94 as core 28 on socket 1 00:08:55.465 EAL: Detected lcore 95 as core 29 on socket 1 00:08:55.465 EAL: Maximum logical cores by configuration: 128 00:08:55.465 EAL: Detected CPU lcores: 96 00:08:55.465 EAL: Detected NUMA nodes: 2 00:08:55.465 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:55.465 EAL: Detected shared linkage of DPDK 00:08:55.465 EAL: No shared files mode enabled, IPC will be disabled 00:08:55.465 EAL: Bus pci wants IOVA as 'DC' 00:08:55.465 EAL: Buses did not request a specific IOVA mode. 00:08:55.465 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:55.465 EAL: Selected IOVA mode 'VA' 00:08:55.465 EAL: Probing VFIO support... 00:08:55.466 EAL: IOMMU type 1 (Type 1) is supported 00:08:55.466 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:55.466 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:55.466 EAL: VFIO support initialized 00:08:55.466 EAL: Ask a virtual area of 0x2e000 bytes 00:08:55.466 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:55.466 EAL: Setting up physically contiguous memory... 
00:08:55.466 EAL: Setting maximum number of open files to 524288
00:08:55.466 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:55.466 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:08:55.466 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:55.466 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:08:55.466 EAL: Ask a virtual area of 0x61000 bytes
00:08:55.466 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:08:55.466 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:55.466 EAL: Ask a virtual area of 0x400000000 bytes
00:08:55.466 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:08:55.466 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:08:55.466 EAL: Hugepages will be freed exactly as allocated.
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: TSC frequency is ~2300000 KHz
00:08:55.466 EAL: Main lcore 0 is ready (tid=7f6333007a00;cpuset=[0])
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 0
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 2MB
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: No PCI address specified using 'addr=' in: bus=pci
00:08:55.466 EAL: Mem event callback 'spdk:(nil)' registered
00:08:55.466
00:08:55.466
00:08:55.466 CUnit - A unit testing framework for C - Version 2.1-3
00:08:55.466 http://cunit.sourceforge.net/
00:08:55.466
00:08:55.466
00:08:55.466 Suite: components_suite
00:08:55.466 Test: vtophys_malloc_test ...passed
00:08:55.466 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 4MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 4MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 6MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 6MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 10MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 10MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 18MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 18MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 34MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 34MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 66MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 66MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.466 EAL: Restoring previous memory policy: 4
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was expanded by 130MB
00:08:55.466 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.466 EAL: request: mp_malloc_sync
00:08:55.466 EAL: No shared files mode enabled, IPC is disabled
00:08:55.466 EAL: Heap on socket 0 was shrunk by 130MB
00:08:55.466 EAL: Trying to obtain current memory policy.
00:08:55.466 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.724 EAL: Restoring previous memory policy: 4
00:08:55.724 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.724 EAL: request: mp_malloc_sync
00:08:55.724 EAL: No shared files mode enabled, IPC is disabled
00:08:55.724 EAL: Heap on socket 0 was expanded by 258MB
00:08:55.724 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.724 EAL: request: mp_malloc_sync
00:08:55.724 EAL: No shared files mode enabled, IPC is disabled
00:08:55.724 EAL: Heap on socket 0 was shrunk by 258MB
00:08:55.724 EAL: Trying to obtain current memory policy.
00:08:55.724 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:55.724 EAL: Restoring previous memory policy: 4
00:08:55.724 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.724 EAL: request: mp_malloc_sync
00:08:55.724 EAL: No shared files mode enabled, IPC is disabled
00:08:55.724 EAL: Heap on socket 0 was expanded by 514MB
00:08:55.982 EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.982 EAL: request: mp_malloc_sync
00:08:55.982 EAL: No shared files mode enabled, IPC is disabled
00:08:55.982 EAL: Heap on socket 0 was shrunk by 514MB
00:08:55.982 EAL: Trying to obtain current memory policy.
00:08:55.982 EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:56.242 EAL: Restoring previous memory policy: 4
00:08:56.242 EAL: Calling mem event callback 'spdk:(nil)'
00:08:56.242 EAL: request: mp_malloc_sync
00:08:56.242 EAL: No shared files mode enabled, IPC is disabled
00:08:56.242 EAL: Heap on socket 0 was expanded by 1026MB
00:08:56.242 EAL: Calling mem event callback 'spdk:(nil)'
00:08:56.502 EAL: request: mp_malloc_sync
00:08:56.502 EAL: No shared files mode enabled, IPC is disabled
00:08:56.502 EAL: Heap on socket 0 was shrunk by 1026MB
00:08:56.502 passed
00:08:56.502
00:08:56.502 Run Summary: Type Total Ran Passed Failed Inactive
00:08:56.502 suites 1 1 n/a 0 0
00:08:56.502 tests 2 2 2 0 0
00:08:56.502 asserts 497 497 497 0 n/a
00:08:56.502
00:08:56.502 Elapsed time = 0.976 seconds
00:08:56.502 EAL: Calling mem event callback 'spdk:(nil)'
00:08:56.502 EAL: request: mp_malloc_sync
00:08:56.502 EAL: No shared files mode enabled, IPC is disabled
00:08:56.502 EAL: Heap on socket 0 was shrunk by 2MB
00:08:56.502 EAL: No shared files mode enabled, IPC is disabled
00:08:56.502 EAL: No shared files mode enabled, IPC is disabled
00:08:56.502 EAL: No shared files mode enabled, IPC is disabled
00:08:56.502
00:08:56.502 real 0m1.104s
00:08:56.502 user 0m0.657s
00:08:56.502 sys 0m0.422s
00:08:56.502 14:28:08 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.502 14:28:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:56.502 ************************************
00:08:56.502 END TEST env_vtophys
00:08:56.502 ************************************
00:08:56.502 14:28:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:56.502 14:28:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.502 14:28:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.502 14:28:08 env -- common/autotest_common.sh@10 -- # set +x
************************************
00:08:56.502 START TEST env_pci
00:08:56.502 ************************************
00:08:56.502 14:28:08 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:56.502
00:08:56.502
00:08:56.502 CUnit - A unit testing framework for C - Version 2.1-3
00:08:56.502 http://cunit.sourceforge.net/
00:08:56.502
00:08:56.502
00:08:56.502 Suite: pci
00:08:56.502 Test: pci_hook ...[2024-11-20 14:28:08.374466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1397266 has claimed it
00:08:56.502 EAL: Cannot find device (10000:00:01.0)
00:08:56.503 EAL: Failed to attach device on primary process
00:08:56.503 passed
00:08:56.503
00:08:56.503 Run Summary: Type Total Ran Passed Failed Inactive
00:08:56.503 suites 1 1 n/a 0 0
00:08:56.503 tests 1 1 1 0 0
00:08:56.503 asserts 25 25 25 0 n/a
00:08:56.503
00:08:56.503 Elapsed time = 0.026 seconds
00:08:56.503
00:08:56.503 real 0m0.045s
00:08:56.503 user 0m0.012s
00:08:56.503 sys 0m0.033s
00:08:56.503 14:28:08 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.503 14:28:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:56.503 ************************************
00:08:56.503 END TEST env_pci
00:08:56.503 ************************************
00:08:56.503 14:28:08 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:56.503 14:28:08 env -- env/env.sh@15 -- # uname
00:08:56.503 14:28:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:56.503 14:28:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:56.503 14:28:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:56.503 14:28:08 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:56.503 14:28:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.503 14:28:08 env -- common/autotest_common.sh@10 -- # set +x
00:08:56.762 ************************************
00:08:56.762 START TEST env_dpdk_post_init
00:08:56.762 ************************************
00:08:56.762 14:28:08 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:56.762 EAL: Detected CPU lcores: 96
00:08:56.762 EAL: Detected NUMA nodes: 2
00:08:56.762 EAL: Detected shared linkage of DPDK
00:08:56.762 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:56.762 EAL: Selected IOVA mode 'VA'
00:08:56.762 EAL: VFIO support initialized
00:08:56.762 TELEMETRY: No legacy callbacks, legacy socket not created
00:08:56.762 EAL: Using IOMMU type 1 (Type 1)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:08:56.762 EAL: Ignore mapping IO port bar(1)
00:08:56.762 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:08:57.699 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:08:57.699 EAL: Ignore mapping IO port bar(1)
00:08:57.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:08:57.699 EAL: Ignore mapping IO port bar(1)
00:08:57.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:08:57.699 EAL: Ignore mapping IO port bar(1)
00:08:57.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:08:57.699 EAL: Ignore mapping IO port bar(1)
00:08:57.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:08:57.699 EAL: Ignore mapping IO port bar(1)
00:08:57.700 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:08:57.700 EAL: Ignore mapping IO port bar(1)
00:08:57.700 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:08:57.700 EAL: Ignore mapping IO port bar(1)
00:08:57.700 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:08:57.700 EAL: Ignore mapping IO port bar(1)
00:08:57.700 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:09:00.987 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:09:00.987 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:09:00.987 Starting DPDK initialization...
00:09:00.987 Starting SPDK post initialization...
00:09:00.987 SPDK NVMe probe
00:09:00.987 Attaching to 0000:5e:00.0
00:09:00.987 Attached to 0000:5e:00.0
00:09:00.987 Cleaning up...
00:09:00.987
00:09:00.987 real 0m4.331s
00:09:00.987 user 0m2.952s
00:09:00.987 sys 0m0.452s
00:09:00.987 14:28:12 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:00.987 14:28:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:09:00.987 ************************************
00:09:00.987 END TEST env_dpdk_post_init
00:09:00.987 ************************************
00:09:00.987 14:28:12 env -- env/env.sh@26 -- # uname
00:09:00.987 14:28:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:09:00.987 14:28:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:09:00.987 14:28:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:00.987 14:28:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:00.987 14:28:12 env -- common/autotest_common.sh@10 -- # set +x
00:09:00.987 ************************************
00:09:00.987 START TEST env_mem_callbacks
00:09:00.987 ************************************
00:09:00.987 14:28:12 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:09:00.987 EAL: Detected CPU lcores: 96
00:09:00.987 EAL: Detected NUMA nodes: 2
00:09:00.987 EAL: Detected shared linkage of DPDK
00:09:00.987 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:09:00.987 EAL: Selected IOVA mode 'VA'
00:09:00.987 EAL: VFIO support initialized
00:09:00.987 TELEMETRY: No legacy callbacks, legacy socket not created
00:09:00.987
00:09:00.987
00:09:00.987 CUnit - A unit testing framework for C - Version 2.1-3
00:09:00.987 http://cunit.sourceforge.net/
00:09:00.987
00:09:00.987
00:09:00.987 Suite: memory
00:09:00.987 Test: test ...
00:09:00.987 register 0x200000200000 2097152
00:09:00.987 malloc 3145728
00:09:00.987 register 0x200000400000 4194304
00:09:00.987 buf 0x200000500000 len 3145728 PASSED
00:09:00.987 malloc 64
00:09:00.987 buf 0x2000004fff40 len 64 PASSED
00:09:00.987 malloc 4194304
00:09:00.987 register 0x200000800000 6291456
00:09:00.987 buf 0x200000a00000 len 4194304 PASSED
00:09:00.987 free 0x200000500000 3145728
00:09:01.246 free 0x2000004fff40 64
00:09:01.246 unregister 0x200000400000 4194304 PASSED
00:09:01.246 free 0x200000a00000 4194304
00:09:01.246 unregister 0x200000800000 6291456 PASSED
00:09:01.246 malloc 8388608
00:09:01.246 register 0x200000400000 10485760
00:09:01.246 buf 0x200000600000 len 8388608 PASSED
00:09:01.246 free 0x200000600000 8388608
00:09:01.246 unregister 0x200000400000 10485760 PASSED
00:09:01.246 passed
00:09:01.246
00:09:01.246 Run Summary: Type Total Ran Passed Failed Inactive
00:09:01.246 suites 1 1 n/a 0 0
00:09:01.246 tests 1 1 1 0 0
00:09:01.247 asserts 15 15 15 0 n/a
00:09:01.247
00:09:01.247 Elapsed time = 0.008 seconds
00:09:01.247
00:09:01.247 real 0m0.060s
00:09:01.247 user 0m0.026s
00:09:01.247 sys 0m0.034s
00:09:01.247 14:28:12 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:01.247 14:28:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:09:01.247 ************************************
00:09:01.247 END TEST env_mem_callbacks
00:09:01.247 ************************************
00:09:01.247
00:09:01.247 real 0m6.230s
00:09:01.247 user 0m4.044s
00:09:01.247 sys 0m1.267s
00:09:01.247 14:28:12 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:01.247 14:28:12 env -- common/autotest_common.sh@10 -- # set +x
00:09:01.247 ************************************
00:09:01.247 END TEST env
00:09:01.247 ************************************
00:09:01.247 14:28:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:09:01.247 14:28:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:01.247 14:28:13 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:01.247 14:28:13 -- common/autotest_common.sh@10 -- # set +x
00:09:01.247 ************************************
00:09:01.247 START TEST rpc
00:09:01.247 ************************************
00:09:01.247 14:28:13 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:09:01.247 * Looking for test storage...
00:09:01.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:09:01.247 14:28:13 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:01.247 14:28:13 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:09:01.247 14:28:13 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:01.506 14:28:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:01.506 14:28:13 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:01.506 14:28:13 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:01.506 14:28:13 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:01.506 14:28:13 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:01.506 14:28:13 rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:01.506 14:28:13 rpc -- scripts/common.sh@345 -- # : 1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:01.506 14:28:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:01.506 14:28:13 rpc -- scripts/common.sh@365 -- # decimal 1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@353 -- # local d=1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:01.506 14:28:13 rpc -- scripts/common.sh@355 -- # echo 1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:01.506 14:28:13 rpc -- scripts/common.sh@366 -- # decimal 2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@353 -- # local d=2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:01.506 14:28:13 rpc -- scripts/common.sh@355 -- # echo 2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:01.506 14:28:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:01.506 14:28:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:01.506 14:28:13 rpc -- scripts/common.sh@368 -- # return 0
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.506 --rc genhtml_branch_coverage=1
00:09:01.506 --rc genhtml_function_coverage=1
00:09:01.506 --rc genhtml_legend=1
00:09:01.506 --rc geninfo_all_blocks=1
00:09:01.506 --rc geninfo_unexecuted_blocks=1
00:09:01.506
00:09:01.506 '
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.506 --rc genhtml_branch_coverage=1
00:09:01.506 --rc genhtml_function_coverage=1
00:09:01.506 --rc genhtml_legend=1
00:09:01.506 --rc geninfo_all_blocks=1
00:09:01.506 --rc geninfo_unexecuted_blocks=1
00:09:01.506
00:09:01.506 '
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.506 --rc genhtml_branch_coverage=1
00:09:01.506 --rc genhtml_function_coverage=1
00:09:01.506 --rc genhtml_legend=1
00:09:01.506 --rc geninfo_all_blocks=1
00:09:01.506 --rc geninfo_unexecuted_blocks=1
00:09:01.506
00:09:01.506 '
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:01.506 --rc genhtml_branch_coverage=1
00:09:01.506 --rc genhtml_function_coverage=1
00:09:01.506 --rc genhtml_legend=1
00:09:01.506 --rc geninfo_all_blocks=1
00:09:01.506 --rc geninfo_unexecuted_blocks=1
00:09:01.506
00:09:01.506 '
00:09:01.506 14:28:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1398168
00:09:01.506 14:28:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:01.506 14:28:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:09:01.506 14:28:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1398168
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@835 -- # '[' -z 1398168 ']'
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:01.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:01.506 14:28:13 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:01.506 [2024-11-20 14:28:13.283727] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:09:01.506 [2024-11-20 14:28:13.283775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398168 ]
00:09:01.506 [2024-11-20 14:28:13.358104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.506 [2024-11-20 14:28:13.397701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:09:01.506 [2024-11-20 14:28:13.397740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1398168' to capture a snapshot of events at runtime.
00:09:01.506 [2024-11-20 14:28:13.397748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:01.506 [2024-11-20 14:28:13.397754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:01.506 [2024-11-20 14:28:13.397758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1398168 for offline analysis/debug.
00:09:01.506 [2024-11-20 14:28:13.398360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.442 14:28:14 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:02.442 14:28:14 rpc -- common/autotest_common.sh@868 -- # return 0
00:09:02.442 14:28:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:09:02.442 14:28:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:09:02.442 14:28:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:09:02.442 14:28:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:09:02.442 14:28:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:02.442 14:28:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:02.442 14:28:14 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:02.442 ************************************
00:09:02.442 START TEST rpc_integrity
00:09:02.442 ************************************
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:09:02.442 {
00:09:02.442 "name": "Malloc0",
00:09:02.442 "aliases": [
00:09:02.442 "3ace94ba-05fe-4f15-a4be-c04d17c211df"
00:09:02.442 ],
00:09:02.442 "product_name": "Malloc disk",
00:09:02.442 "block_size": 512,
00:09:02.442 "num_blocks": 16384,
00:09:02.442 "uuid": "3ace94ba-05fe-4f15-a4be-c04d17c211df",
00:09:02.442 "assigned_rate_limits": {
00:09:02.442 "rw_ios_per_sec": 0,
00:09:02.442 "rw_mbytes_per_sec": 0,
00:09:02.442 "r_mbytes_per_sec": 0,
00:09:02.442 "w_mbytes_per_sec": 0
00:09:02.442 },
00:09:02.442 "claimed": false,
00:09:02.442 "zoned": false,
00:09:02.442 "supported_io_types": {
00:09:02.442 "read": true,
00:09:02.442 "write": true,
00:09:02.442 "unmap": true,
00:09:02.442 "flush": true,
00:09:02.442 "reset": true,
00:09:02.442 "nvme_admin": false,
00:09:02.442 "nvme_io": false,
00:09:02.442 "nvme_io_md": false,
00:09:02.442 "write_zeroes": true,
00:09:02.442 "zcopy": true,
00:09:02.442 "get_zone_info": false,
00:09:02.442 "zone_management": false,
00:09:02.442 "zone_append": false,
00:09:02.442 "compare": false,
00:09:02.442 "compare_and_write": false,
00:09:02.442 "abort": true,
00:09:02.442 "seek_hole": false,
00:09:02.442 "seek_data": false,
00:09:02.442 "copy": true,
00:09:02.442 "nvme_iov_md": false
00:09:02.442 },
00:09:02.442 "memory_domains": [
00:09:02.442 {
00:09:02.442 "dma_device_id": "system",
00:09:02.442 "dma_device_type": 1
00:09:02.442 },
00:09:02.442 {
00:09:02.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:02.442 "dma_device_type": 2
00:09:02.442 }
00:09:02.442 ],
00:09:02.442 "driver_specific": {}
00:09:02.442 }
00:09:02.442 ]'
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:02.442 [2024-11-20 14:28:14.275708] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:09:02.442 [2024-11-20 14:28:14.275737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:02.442 [2024-11-20 14:28:14.275750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x226f280
00:09:02.442 [2024-11-20 14:28:14.275756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:02.442 [2024-11-20 14:28:14.276857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:02.442 [2024-11-20 14:28:14.276878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:09:02.442 Passthru0
00:09:02.442 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:02.442 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:09:02.443 {
00:09:02.443 "name": "Malloc0",
00:09:02.443 "aliases": [
00:09:02.443 "3ace94ba-05fe-4f15-a4be-c04d17c211df"
00:09:02.443 ],
00:09:02.443 "product_name": "Malloc disk",
00:09:02.443 "block_size": 512,
00:09:02.443 "num_blocks": 16384,
00:09:02.443 "uuid": "3ace94ba-05fe-4f15-a4be-c04d17c211df",
00:09:02.443 "assigned_rate_limits": {
00:09:02.443 "rw_ios_per_sec": 0,
00:09:02.443 "rw_mbytes_per_sec": 0,
00:09:02.443 "r_mbytes_per_sec": 0,
00:09:02.443 "w_mbytes_per_sec": 0
00:09:02.443 },
00:09:02.443 "claimed": true,
00:09:02.443 "claim_type": "exclusive_write",
00:09:02.443 "zoned": false,
00:09:02.443 "supported_io_types": {
00:09:02.443 "read": true,
00:09:02.443 "write": true,
00:09:02.443 "unmap": true,
00:09:02.443 "flush": true,
00:09:02.443 "reset": true,
00:09:02.443 "nvme_admin": false,
00:09:02.443 "nvme_io": false,
00:09:02.443 "nvme_io_md": false,
00:09:02.443 "write_zeroes": true,
00:09:02.443 "zcopy": true,
00:09:02.443 "get_zone_info": false,
00:09:02.443 "zone_management": false,
00:09:02.443 "zone_append": false,
00:09:02.443 "compare": false,
00:09:02.443 "compare_and_write": false,
00:09:02.443 "abort": true,
00:09:02.443 "seek_hole": false,
00:09:02.443 "seek_data": false,
00:09:02.443 "copy": true,
00:09:02.443 "nvme_iov_md": false
00:09:02.443 },
00:09:02.443 "memory_domains": [
00:09:02.443 {
00:09:02.443 "dma_device_id": "system",
00:09:02.443 "dma_device_type": 1
00:09:02.443 },
00:09:02.443 {
00:09:02.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:02.443 "dma_device_type": 2
00:09:02.443 }
00:09:02.443 ],
00:09:02.443 "driver_specific": {}
00:09:02.443 },
00:09:02.443 {
00:09:02.443 "name": "Passthru0", 00:09:02.443 "aliases": [ 00:09:02.443 "c2813d4c-8cbd-5588-b74b-27cf44c1e6b1" 00:09:02.443 ], 00:09:02.443 "product_name": "passthru", 00:09:02.443 "block_size": 512, 00:09:02.443 "num_blocks": 16384, 00:09:02.443 "uuid": "c2813d4c-8cbd-5588-b74b-27cf44c1e6b1", 00:09:02.443 "assigned_rate_limits": { 00:09:02.443 "rw_ios_per_sec": 0, 00:09:02.443 "rw_mbytes_per_sec": 0, 00:09:02.443 "r_mbytes_per_sec": 0, 00:09:02.443 "w_mbytes_per_sec": 0 00:09:02.443 }, 00:09:02.443 "claimed": false, 00:09:02.443 "zoned": false, 00:09:02.443 "supported_io_types": { 00:09:02.443 "read": true, 00:09:02.443 "write": true, 00:09:02.443 "unmap": true, 00:09:02.443 "flush": true, 00:09:02.443 "reset": true, 00:09:02.443 "nvme_admin": false, 00:09:02.443 "nvme_io": false, 00:09:02.443 "nvme_io_md": false, 00:09:02.443 "write_zeroes": true, 00:09:02.443 "zcopy": true, 00:09:02.443 "get_zone_info": false, 00:09:02.443 "zone_management": false, 00:09:02.443 "zone_append": false, 00:09:02.443 "compare": false, 00:09:02.443 "compare_and_write": false, 00:09:02.443 "abort": true, 00:09:02.443 "seek_hole": false, 00:09:02.443 "seek_data": false, 00:09:02.443 "copy": true, 00:09:02.443 "nvme_iov_md": false 00:09:02.443 }, 00:09:02.443 "memory_domains": [ 00:09:02.443 { 00:09:02.443 "dma_device_id": "system", 00:09:02.443 "dma_device_type": 1 00:09:02.443 }, 00:09:02.443 { 00:09:02.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.443 "dma_device_type": 2 00:09:02.443 } 00:09:02.443 ], 00:09:02.443 "driver_specific": { 00:09:02.443 "passthru": { 00:09:02.443 "name": "Passthru0", 00:09:02.443 "base_bdev_name": "Malloc0" 00:09:02.443 } 00:09:02.443 } 00:09:02.443 } 00:09:02.443 ]' 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:02.443 14:28:14 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:02.443 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:02.443 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:02.702 14:28:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:02.702 00:09:02.702 real 0m0.277s 00:09:02.702 user 0m0.176s 00:09:02.702 sys 0m0.038s 00:09:02.702 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.702 14:28:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 ************************************ 00:09:02.702 END TEST rpc_integrity 00:09:02.702 ************************************ 00:09:02.702 14:28:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:02.702 14:28:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.702 14:28:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.702 14:28:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 ************************************ 00:09:02.702 START TEST rpc_plugins 
00:09:02.702 ************************************ 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:02.702 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.702 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:02.702 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:02.702 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.702 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:02.702 { 00:09:02.702 "name": "Malloc1", 00:09:02.702 "aliases": [ 00:09:02.702 "e34d2d7b-7c4c-4a0d-bfa8-21b83d7b313e" 00:09:02.702 ], 00:09:02.702 "product_name": "Malloc disk", 00:09:02.702 "block_size": 4096, 00:09:02.702 "num_blocks": 256, 00:09:02.702 "uuid": "e34d2d7b-7c4c-4a0d-bfa8-21b83d7b313e", 00:09:02.702 "assigned_rate_limits": { 00:09:02.702 "rw_ios_per_sec": 0, 00:09:02.702 "rw_mbytes_per_sec": 0, 00:09:02.702 "r_mbytes_per_sec": 0, 00:09:02.702 "w_mbytes_per_sec": 0 00:09:02.702 }, 00:09:02.702 "claimed": false, 00:09:02.702 "zoned": false, 00:09:02.702 "supported_io_types": { 00:09:02.702 "read": true, 00:09:02.703 "write": true, 00:09:02.703 "unmap": true, 00:09:02.703 "flush": true, 00:09:02.703 "reset": true, 00:09:02.703 "nvme_admin": false, 00:09:02.703 "nvme_io": false, 00:09:02.703 "nvme_io_md": false, 00:09:02.703 "write_zeroes": true, 00:09:02.703 "zcopy": true, 00:09:02.703 "get_zone_info": false, 00:09:02.703 "zone_management": false, 00:09:02.703 
"zone_append": false, 00:09:02.703 "compare": false, 00:09:02.703 "compare_and_write": false, 00:09:02.703 "abort": true, 00:09:02.703 "seek_hole": false, 00:09:02.703 "seek_data": false, 00:09:02.703 "copy": true, 00:09:02.703 "nvme_iov_md": false 00:09:02.703 }, 00:09:02.703 "memory_domains": [ 00:09:02.703 { 00:09:02.703 "dma_device_id": "system", 00:09:02.703 "dma_device_type": 1 00:09:02.703 }, 00:09:02.703 { 00:09:02.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.703 "dma_device_type": 2 00:09:02.703 } 00:09:02.703 ], 00:09:02.703 "driver_specific": {} 00:09:02.703 } 00:09:02.703 ]' 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:02.703 14:28:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:02.703 00:09:02.703 real 0m0.142s 00:09:02.703 user 0m0.091s 00:09:02.703 sys 0m0.014s 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.703 14:28:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:02.703 ************************************ 
00:09:02.703 END TEST rpc_plugins 00:09:02.703 ************************************ 00:09:02.962 14:28:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:02.962 14:28:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.962 14:28:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.962 14:28:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.962 ************************************ 00:09:02.962 START TEST rpc_trace_cmd_test 00:09:02.962 ************************************ 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.962 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:02.962 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1398168", 00:09:02.962 "tpoint_group_mask": "0x8", 00:09:02.962 "iscsi_conn": { 00:09:02.962 "mask": "0x2", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "scsi": { 00:09:02.962 "mask": "0x4", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "bdev": { 00:09:02.962 "mask": "0x8", 00:09:02.962 "tpoint_mask": "0xffffffffffffffff" 00:09:02.962 }, 00:09:02.962 "nvmf_rdma": { 00:09:02.962 "mask": "0x10", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "nvmf_tcp": { 00:09:02.962 "mask": "0x20", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "ftl": { 00:09:02.962 "mask": "0x40", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "blobfs": { 00:09:02.962 "mask": "0x80", 00:09:02.962 
"tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "dsa": { 00:09:02.962 "mask": "0x200", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "thread": { 00:09:02.962 "mask": "0x400", 00:09:02.962 "tpoint_mask": "0x0" 00:09:02.962 }, 00:09:02.962 "nvme_pcie": { 00:09:02.962 "mask": "0x800", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "iaa": { 00:09:02.963 "mask": "0x1000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "nvme_tcp": { 00:09:02.963 "mask": "0x2000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "bdev_nvme": { 00:09:02.963 "mask": "0x4000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "sock": { 00:09:02.963 "mask": "0x8000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "blob": { 00:09:02.963 "mask": "0x10000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "bdev_raid": { 00:09:02.963 "mask": "0x20000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 }, 00:09:02.963 "scheduler": { 00:09:02.963 "mask": "0x40000", 00:09:02.963 "tpoint_mask": "0x0" 00:09:02.963 } 00:09:02.963 }' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:02.963 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:03.222 14:28:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:09:03.222 00:09:03.222 real 0m0.224s 00:09:03.222 user 0m0.192s 00:09:03.222 sys 0m0.025s 00:09:03.222 14:28:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.222 14:28:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.222 ************************************ 00:09:03.222 END TEST rpc_trace_cmd_test 00:09:03.222 ************************************ 00:09:03.222 14:28:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:03.222 14:28:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:03.222 14:28:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:03.222 14:28:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.222 14:28:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.222 14:28:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.222 ************************************ 00:09:03.222 START TEST rpc_daemon_integrity 00:09:03.222 ************************************ 00:09:03.222 14:28:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:03.222 14:28:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:03.222 14:28:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.222 14:28:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.222 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:03.222 { 00:09:03.222 "name": "Malloc2", 00:09:03.222 "aliases": [ 00:09:03.222 "9530458f-9df8-49cf-95b8-79aa2b19a2d6" 00:09:03.222 ], 00:09:03.222 "product_name": "Malloc disk", 00:09:03.222 "block_size": 512, 00:09:03.222 "num_blocks": 16384, 00:09:03.222 "uuid": "9530458f-9df8-49cf-95b8-79aa2b19a2d6", 00:09:03.222 "assigned_rate_limits": { 00:09:03.222 "rw_ios_per_sec": 0, 00:09:03.222 "rw_mbytes_per_sec": 0, 00:09:03.222 "r_mbytes_per_sec": 0, 00:09:03.222 "w_mbytes_per_sec": 0 00:09:03.222 }, 00:09:03.222 "claimed": false, 00:09:03.222 "zoned": false, 00:09:03.222 "supported_io_types": { 00:09:03.222 "read": true, 00:09:03.222 "write": true, 00:09:03.222 "unmap": true, 00:09:03.222 "flush": true, 00:09:03.222 "reset": true, 00:09:03.222 "nvme_admin": false, 00:09:03.222 "nvme_io": false, 00:09:03.222 "nvme_io_md": false, 00:09:03.222 "write_zeroes": true, 00:09:03.222 "zcopy": true, 00:09:03.222 "get_zone_info": false, 00:09:03.222 "zone_management": false, 00:09:03.222 "zone_append": false, 00:09:03.222 "compare": false, 00:09:03.222 "compare_and_write": false, 00:09:03.222 "abort": true, 00:09:03.222 "seek_hole": false, 00:09:03.222 "seek_data": false, 00:09:03.222 "copy": true, 00:09:03.222 "nvme_iov_md": false 00:09:03.222 }, 00:09:03.222 "memory_domains": [ 00:09:03.222 { 
00:09:03.222 "dma_device_id": "system", 00:09:03.222 "dma_device_type": 1 00:09:03.223 }, 00:09:03.223 { 00:09:03.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.223 "dma_device_type": 2 00:09:03.223 } 00:09:03.223 ], 00:09:03.223 "driver_specific": {} 00:09:03.223 } 00:09:03.223 ]' 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.223 [2024-11-20 14:28:15.126031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:03.223 [2024-11-20 14:28:15.126060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.223 [2024-11-20 14:28:15.126072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2271150 00:09:03.223 [2024-11-20 14:28:15.126078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.223 [2024-11-20 14:28:15.127074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.223 [2024-11-20 14:28:15.127093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:03.223 Passthru0 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:03.223 { 00:09:03.223 "name": "Malloc2", 00:09:03.223 "aliases": [ 00:09:03.223 "9530458f-9df8-49cf-95b8-79aa2b19a2d6" 00:09:03.223 ], 00:09:03.223 "product_name": "Malloc disk", 00:09:03.223 "block_size": 512, 00:09:03.223 "num_blocks": 16384, 00:09:03.223 "uuid": "9530458f-9df8-49cf-95b8-79aa2b19a2d6", 00:09:03.223 "assigned_rate_limits": { 00:09:03.223 "rw_ios_per_sec": 0, 00:09:03.223 "rw_mbytes_per_sec": 0, 00:09:03.223 "r_mbytes_per_sec": 0, 00:09:03.223 "w_mbytes_per_sec": 0 00:09:03.223 }, 00:09:03.223 "claimed": true, 00:09:03.223 "claim_type": "exclusive_write", 00:09:03.223 "zoned": false, 00:09:03.223 "supported_io_types": { 00:09:03.223 "read": true, 00:09:03.223 "write": true, 00:09:03.223 "unmap": true, 00:09:03.223 "flush": true, 00:09:03.223 "reset": true, 00:09:03.223 "nvme_admin": false, 00:09:03.223 "nvme_io": false, 00:09:03.223 "nvme_io_md": false, 00:09:03.223 "write_zeroes": true, 00:09:03.223 "zcopy": true, 00:09:03.223 "get_zone_info": false, 00:09:03.223 "zone_management": false, 00:09:03.223 "zone_append": false, 00:09:03.223 "compare": false, 00:09:03.223 "compare_and_write": false, 00:09:03.223 "abort": true, 00:09:03.223 "seek_hole": false, 00:09:03.223 "seek_data": false, 00:09:03.223 "copy": true, 00:09:03.223 "nvme_iov_md": false 00:09:03.223 }, 00:09:03.223 "memory_domains": [ 00:09:03.223 { 00:09:03.223 "dma_device_id": "system", 00:09:03.223 "dma_device_type": 1 00:09:03.223 }, 00:09:03.223 { 00:09:03.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.223 "dma_device_type": 2 00:09:03.223 } 00:09:03.223 ], 00:09:03.223 "driver_specific": {} 00:09:03.223 }, 00:09:03.223 { 00:09:03.223 "name": "Passthru0", 00:09:03.223 "aliases": [ 00:09:03.223 "a2da26c2-6564-59a1-8e43-d58a3d4f9b13" 00:09:03.223 ], 00:09:03.223 "product_name": "passthru", 00:09:03.223 "block_size": 512, 00:09:03.223 "num_blocks": 16384, 00:09:03.223 "uuid": 
"a2da26c2-6564-59a1-8e43-d58a3d4f9b13", 00:09:03.223 "assigned_rate_limits": { 00:09:03.223 "rw_ios_per_sec": 0, 00:09:03.223 "rw_mbytes_per_sec": 0, 00:09:03.223 "r_mbytes_per_sec": 0, 00:09:03.223 "w_mbytes_per_sec": 0 00:09:03.223 }, 00:09:03.223 "claimed": false, 00:09:03.223 "zoned": false, 00:09:03.223 "supported_io_types": { 00:09:03.223 "read": true, 00:09:03.223 "write": true, 00:09:03.223 "unmap": true, 00:09:03.223 "flush": true, 00:09:03.223 "reset": true, 00:09:03.223 "nvme_admin": false, 00:09:03.223 "nvme_io": false, 00:09:03.223 "nvme_io_md": false, 00:09:03.223 "write_zeroes": true, 00:09:03.223 "zcopy": true, 00:09:03.223 "get_zone_info": false, 00:09:03.223 "zone_management": false, 00:09:03.223 "zone_append": false, 00:09:03.223 "compare": false, 00:09:03.223 "compare_and_write": false, 00:09:03.223 "abort": true, 00:09:03.223 "seek_hole": false, 00:09:03.223 "seek_data": false, 00:09:03.223 "copy": true, 00:09:03.223 "nvme_iov_md": false 00:09:03.223 }, 00:09:03.223 "memory_domains": [ 00:09:03.223 { 00:09:03.223 "dma_device_id": "system", 00:09:03.223 "dma_device_type": 1 00:09:03.223 }, 00:09:03.223 { 00:09:03.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.223 "dma_device_type": 2 00:09:03.223 } 00:09:03.223 ], 00:09:03.223 "driver_specific": { 00:09:03.223 "passthru": { 00:09:03.223 "name": "Passthru0", 00:09:03.223 "base_bdev_name": "Malloc2" 00:09:03.223 } 00:09:03.223 } 00:09:03.223 } 00:09:03.223 ]' 00:09:03.223 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:03.482 00:09:03.482 real 0m0.280s 00:09:03.482 user 0m0.171s 00:09:03.482 sys 0m0.042s 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.482 14:28:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.482 ************************************ 00:09:03.482 END TEST rpc_daemon_integrity 00:09:03.482 ************************************ 00:09:03.482 14:28:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:03.482 14:28:15 rpc -- rpc/rpc.sh@84 -- # killprocess 1398168 00:09:03.482 14:28:15 rpc -- common/autotest_common.sh@954 -- # '[' -z 1398168 ']' 00:09:03.482 14:28:15 rpc -- common/autotest_common.sh@958 -- # kill -0 1398168 00:09:03.482 14:28:15 rpc -- common/autotest_common.sh@959 -- # uname 00:09:03.482 14:28:15 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.483 14:28:15 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398168 00:09:03.483 14:28:15 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.483 14:28:15 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.483 14:28:15 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398168' 00:09:03.483 killing process with pid 1398168 00:09:03.483 14:28:15 rpc -- common/autotest_common.sh@973 -- # kill 1398168 00:09:03.483 14:28:15 rpc -- common/autotest_common.sh@978 -- # wait 1398168 00:09:03.742 00:09:03.742 real 0m2.598s 00:09:03.742 user 0m3.336s 00:09:03.742 sys 0m0.715s 00:09:03.742 14:28:15 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.742 14:28:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.742 ************************************ 00:09:03.742 END TEST rpc 00:09:03.742 ************************************ 00:09:03.742 14:28:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:03.742 14:28:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.742 14:28:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.742 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:09:04.001 ************************************ 00:09:04.001 START TEST skip_rpc 00:09:04.001 ************************************ 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:04.001 * Looking for test storage... 
00:09:04.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.001 14:28:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.001 --rc genhtml_branch_coverage=1 00:09:04.001 --rc genhtml_function_coverage=1 00:09:04.001 --rc genhtml_legend=1 00:09:04.001 --rc geninfo_all_blocks=1 00:09:04.001 --rc geninfo_unexecuted_blocks=1 00:09:04.001 00:09:04.001 ' 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.001 --rc genhtml_branch_coverage=1 00:09:04.001 --rc genhtml_function_coverage=1 00:09:04.001 --rc genhtml_legend=1 00:09:04.001 --rc geninfo_all_blocks=1 00:09:04.001 --rc geninfo_unexecuted_blocks=1 00:09:04.001 00:09:04.001 ' 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:09:04.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.001 --rc genhtml_branch_coverage=1 00:09:04.001 --rc genhtml_function_coverage=1 00:09:04.001 --rc genhtml_legend=1 00:09:04.001 --rc geninfo_all_blocks=1 00:09:04.001 --rc geninfo_unexecuted_blocks=1 00:09:04.001 00:09:04.001 ' 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.001 --rc genhtml_branch_coverage=1 00:09:04.001 --rc genhtml_function_coverage=1 00:09:04.001 --rc genhtml_legend=1 00:09:04.001 --rc geninfo_all_blocks=1 00:09:04.001 --rc geninfo_unexecuted_blocks=1 00:09:04.001 00:09:04.001 ' 00:09:04.001 14:28:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:04.001 14:28:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:04.001 14:28:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.001 14:28:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.001 ************************************ 00:09:04.001 START TEST skip_rpc 00:09:04.001 ************************************ 00:09:04.001 14:28:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:04.001 14:28:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1398807 00:09:04.001 14:28:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.001 14:28:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:04.001 14:28:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:09:04.261 [2024-11-20 14:28:15.989584] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:04.261 [2024-11-20 14:28:15.989622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398807 ] 00:09:04.261 [2024-11-20 14:28:16.062149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.261 [2024-11-20 14:28:16.102282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.529 14:28:20 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1398807 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1398807 ']' 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1398807 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.529 14:28:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1398807 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1398807' 00:09:09.529 killing process with pid 1398807 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1398807 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1398807 00:09:09.529 00:09:09.529 real 0m5.365s 00:09:09.529 user 0m5.126s 00:09:09.529 sys 0m0.279s 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.529 14:28:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.529 ************************************ 00:09:09.529 END TEST skip_rpc 00:09:09.529 ************************************ 00:09:09.529 14:28:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:09.529 14:28:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.529 14:28:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.529 14:28:21 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.529 ************************************ 00:09:09.529 START TEST skip_rpc_with_json 00:09:09.529 ************************************ 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1399760 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1399760 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1399760 ']' 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.529 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:09.529 [2024-11-20 14:28:21.420432] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:09.529 [2024-11-20 14:28:21.420472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399760 ] 00:09:09.788 [2024-11-20 14:28:21.496961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.788 [2024-11-20 14:28:21.539550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.047 [2024-11-20 14:28:21.757378] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:10.047 request: 00:09:10.047 { 00:09:10.047 "trtype": "tcp", 00:09:10.047 "method": "nvmf_get_transports", 00:09:10.047 "req_id": 1 00:09:10.047 } 00:09:10.047 Got JSON-RPC error response 00:09:10.047 response: 00:09:10.047 { 00:09:10.047 "code": -19, 00:09:10.047 "message": "No such device" 00:09:10.047 } 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.047 [2024-11-20 14:28:21.769495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.047 14:28:21 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.047 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.048 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:10.048 { 00:09:10.048 "subsystems": [ 00:09:10.048 { 00:09:10.048 "subsystem": "fsdev", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "fsdev_set_opts", 00:09:10.048 "params": { 00:09:10.048 "fsdev_io_pool_size": 65535, 00:09:10.048 "fsdev_io_cache_size": 256 00:09:10.048 } 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "vfio_user_target", 00:09:10.048 "config": null 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "keyring", 00:09:10.048 "config": [] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "iobuf", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "iobuf_set_options", 00:09:10.048 "params": { 00:09:10.048 "small_pool_count": 8192, 00:09:10.048 "large_pool_count": 1024, 00:09:10.048 "small_bufsize": 8192, 00:09:10.048 "large_bufsize": 135168, 00:09:10.048 "enable_numa": false 00:09:10.048 } 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "sock", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "sock_set_default_impl", 00:09:10.048 "params": { 00:09:10.048 "impl_name": "posix" 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "sock_impl_set_options", 00:09:10.048 "params": { 00:09:10.048 "impl_name": "ssl", 00:09:10.048 "recv_buf_size": 4096, 00:09:10.048 "send_buf_size": 4096, 
00:09:10.048 "enable_recv_pipe": true, 00:09:10.048 "enable_quickack": false, 00:09:10.048 "enable_placement_id": 0, 00:09:10.048 "enable_zerocopy_send_server": true, 00:09:10.048 "enable_zerocopy_send_client": false, 00:09:10.048 "zerocopy_threshold": 0, 00:09:10.048 "tls_version": 0, 00:09:10.048 "enable_ktls": false 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "sock_impl_set_options", 00:09:10.048 "params": { 00:09:10.048 "impl_name": "posix", 00:09:10.048 "recv_buf_size": 2097152, 00:09:10.048 "send_buf_size": 2097152, 00:09:10.048 "enable_recv_pipe": true, 00:09:10.048 "enable_quickack": false, 00:09:10.048 "enable_placement_id": 0, 00:09:10.048 "enable_zerocopy_send_server": true, 00:09:10.048 "enable_zerocopy_send_client": false, 00:09:10.048 "zerocopy_threshold": 0, 00:09:10.048 "tls_version": 0, 00:09:10.048 "enable_ktls": false 00:09:10.048 } 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "vmd", 00:09:10.048 "config": [] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "accel", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "accel_set_options", 00:09:10.048 "params": { 00:09:10.048 "small_cache_size": 128, 00:09:10.048 "large_cache_size": 16, 00:09:10.048 "task_count": 2048, 00:09:10.048 "sequence_count": 2048, 00:09:10.048 "buf_count": 2048 00:09:10.048 } 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "bdev", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "bdev_set_options", 00:09:10.048 "params": { 00:09:10.048 "bdev_io_pool_size": 65535, 00:09:10.048 "bdev_io_cache_size": 256, 00:09:10.048 "bdev_auto_examine": true, 00:09:10.048 "iobuf_small_cache_size": 128, 00:09:10.048 "iobuf_large_cache_size": 16 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "bdev_raid_set_options", 00:09:10.048 "params": { 00:09:10.048 "process_window_size_kb": 1024, 00:09:10.048 "process_max_bandwidth_mb_sec": 0 
00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "bdev_iscsi_set_options", 00:09:10.048 "params": { 00:09:10.048 "timeout_sec": 30 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "bdev_nvme_set_options", 00:09:10.048 "params": { 00:09:10.048 "action_on_timeout": "none", 00:09:10.048 "timeout_us": 0, 00:09:10.048 "timeout_admin_us": 0, 00:09:10.048 "keep_alive_timeout_ms": 10000, 00:09:10.048 "arbitration_burst": 0, 00:09:10.048 "low_priority_weight": 0, 00:09:10.048 "medium_priority_weight": 0, 00:09:10.048 "high_priority_weight": 0, 00:09:10.048 "nvme_adminq_poll_period_us": 10000, 00:09:10.048 "nvme_ioq_poll_period_us": 0, 00:09:10.048 "io_queue_requests": 0, 00:09:10.048 "delay_cmd_submit": true, 00:09:10.048 "transport_retry_count": 4, 00:09:10.048 "bdev_retry_count": 3, 00:09:10.048 "transport_ack_timeout": 0, 00:09:10.048 "ctrlr_loss_timeout_sec": 0, 00:09:10.048 "reconnect_delay_sec": 0, 00:09:10.048 "fast_io_fail_timeout_sec": 0, 00:09:10.048 "disable_auto_failback": false, 00:09:10.048 "generate_uuids": false, 00:09:10.048 "transport_tos": 0, 00:09:10.048 "nvme_error_stat": false, 00:09:10.048 "rdma_srq_size": 0, 00:09:10.048 "io_path_stat": false, 00:09:10.048 "allow_accel_sequence": false, 00:09:10.048 "rdma_max_cq_size": 0, 00:09:10.048 "rdma_cm_event_timeout_ms": 0, 00:09:10.048 "dhchap_digests": [ 00:09:10.048 "sha256", 00:09:10.048 "sha384", 00:09:10.048 "sha512" 00:09:10.048 ], 00:09:10.048 "dhchap_dhgroups": [ 00:09:10.048 "null", 00:09:10.048 "ffdhe2048", 00:09:10.048 "ffdhe3072", 00:09:10.048 "ffdhe4096", 00:09:10.048 "ffdhe6144", 00:09:10.048 "ffdhe8192" 00:09:10.048 ] 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "bdev_nvme_set_hotplug", 00:09:10.048 "params": { 00:09:10.048 "period_us": 100000, 00:09:10.048 "enable": false 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "bdev_wait_for_examine" 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 
00:09:10.048 "subsystem": "scsi", 00:09:10.048 "config": null 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "scheduler", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "framework_set_scheduler", 00:09:10.048 "params": { 00:09:10.048 "name": "static" 00:09:10.048 } 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "vhost_scsi", 00:09:10.048 "config": [] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "vhost_blk", 00:09:10.048 "config": [] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "ublk", 00:09:10.048 "config": [] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "nbd", 00:09:10.048 "config": [] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "nvmf", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "nvmf_set_config", 00:09:10.048 "params": { 00:09:10.048 "discovery_filter": "match_any", 00:09:10.048 "admin_cmd_passthru": { 00:09:10.048 "identify_ctrlr": false 00:09:10.048 }, 00:09:10.048 "dhchap_digests": [ 00:09:10.048 "sha256", 00:09:10.048 "sha384", 00:09:10.048 "sha512" 00:09:10.048 ], 00:09:10.048 "dhchap_dhgroups": [ 00:09:10.048 "null", 00:09:10.048 "ffdhe2048", 00:09:10.048 "ffdhe3072", 00:09:10.048 "ffdhe4096", 00:09:10.048 "ffdhe6144", 00:09:10.048 "ffdhe8192" 00:09:10.048 ] 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "nvmf_set_max_subsystems", 00:09:10.048 "params": { 00:09:10.048 "max_subsystems": 1024 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "nvmf_set_crdt", 00:09:10.048 "params": { 00:09:10.048 "crdt1": 0, 00:09:10.048 "crdt2": 0, 00:09:10.048 "crdt3": 0 00:09:10.048 } 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "method": "nvmf_create_transport", 00:09:10.048 "params": { 00:09:10.048 "trtype": "TCP", 00:09:10.048 "max_queue_depth": 128, 00:09:10.048 "max_io_qpairs_per_ctrlr": 127, 00:09:10.048 "in_capsule_data_size": 4096, 00:09:10.048 "max_io_size": 131072, 00:09:10.048 
"io_unit_size": 131072, 00:09:10.048 "max_aq_depth": 128, 00:09:10.048 "num_shared_buffers": 511, 00:09:10.048 "buf_cache_size": 4294967295, 00:09:10.048 "dif_insert_or_strip": false, 00:09:10.048 "zcopy": false, 00:09:10.048 "c2h_success": true, 00:09:10.048 "sock_priority": 0, 00:09:10.048 "abort_timeout_sec": 1, 00:09:10.048 "ack_timeout": 0, 00:09:10.048 "data_wr_pool_size": 0 00:09:10.048 } 00:09:10.048 } 00:09:10.048 ] 00:09:10.048 }, 00:09:10.048 { 00:09:10.048 "subsystem": "iscsi", 00:09:10.048 "config": [ 00:09:10.048 { 00:09:10.048 "method": "iscsi_set_options", 00:09:10.048 "params": { 00:09:10.048 "node_base": "iqn.2016-06.io.spdk", 00:09:10.048 "max_sessions": 128, 00:09:10.048 "max_connections_per_session": 2, 00:09:10.048 "max_queue_depth": 64, 00:09:10.048 "default_time2wait": 2, 00:09:10.048 "default_time2retain": 20, 00:09:10.049 "first_burst_length": 8192, 00:09:10.049 "immediate_data": true, 00:09:10.049 "allow_duplicated_isid": false, 00:09:10.049 "error_recovery_level": 0, 00:09:10.049 "nop_timeout": 60, 00:09:10.049 "nop_in_interval": 30, 00:09:10.049 "disable_chap": false, 00:09:10.049 "require_chap": false, 00:09:10.049 "mutual_chap": false, 00:09:10.049 "chap_group": 0, 00:09:10.049 "max_large_datain_per_connection": 64, 00:09:10.049 "max_r2t_per_connection": 4, 00:09:10.049 "pdu_pool_size": 36864, 00:09:10.049 "immediate_data_pool_size": 16384, 00:09:10.049 "data_out_pool_size": 2048 00:09:10.049 } 00:09:10.049 } 00:09:10.049 ] 00:09:10.049 } 00:09:10.049 ] 00:09:10.049 } 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1399760 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1399760 ']' 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1399760 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399760 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399760' 00:09:10.049 killing process with pid 1399760 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1399760 00:09:10.049 14:28:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1399760 00:09:10.617 14:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1399991 00:09:10.617 14:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:10.617 14:28:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1399991 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1399991 ']' 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1399991 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399991 00:09:15.887 14:28:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399991' 00:09:15.888 killing process with pid 1399991 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1399991 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1399991 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:15.888 00:09:15.888 real 0m6.293s 00:09:15.888 user 0m6.009s 00:09:15.888 sys 0m0.583s 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:15.888 ************************************ 00:09:15.888 END TEST skip_rpc_with_json 00:09:15.888 ************************************ 00:09:15.888 14:28:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:15.888 14:28:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.888 14:28:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.888 14:28:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.888 ************************************ 00:09:15.888 START TEST skip_rpc_with_delay 00:09:15.888 ************************************ 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:15.888 [2024-11-20 14:28:27.787047] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.888 00:09:15.888 real 0m0.067s 00:09:15.888 user 0m0.044s 00:09:15.888 sys 0m0.023s 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.888 14:28:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:15.888 ************************************ 00:09:15.888 END TEST skip_rpc_with_delay 00:09:15.888 ************************************ 00:09:15.888 14:28:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:15.888 14:28:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:15.888 14:28:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:15.888 14:28:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.888 14:28:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.888 14:28:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.147 ************************************ 00:09:16.147 START TEST exit_on_failed_rpc_init 00:09:16.147 ************************************ 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1400965 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1400965 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1400965 ']' 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.147 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.148 14:28:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:16.148 [2024-11-20 14:28:27.931151] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:16.148 [2024-11-20 14:28:27.931201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400965 ] 00:09:16.148 [2024-11-20 14:28:28.009391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.148 [2024-11-20 14:28:28.049327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:16.407 
14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:16.407 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:16.407 [2024-11-20 14:28:28.327147] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:16.407 [2024-11-20 14:28:28.327188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1400971 ] 00:09:16.666 [2024-11-20 14:28:28.401252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.666 [2024-11-20 14:28:28.442593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.666 [2024-11-20 14:28:28.442652] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:16.666 [2024-11-20 14:28:28.442661] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:16.666 [2024-11-20 14:28:28.442670] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1400965 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1400965 ']' 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1400965 00:09:16.666 14:28:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400965 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1400965' 00:09:16.666 killing process with pid 1400965 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1400965 00:09:16.666 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1400965 00:09:16.925 00:09:16.925 real 0m0.971s 00:09:16.925 user 0m1.035s 00:09:16.925 sys 0m0.391s 00:09:16.925 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.925 14:28:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:16.925 ************************************ 00:09:16.925 END TEST exit_on_failed_rpc_init 00:09:16.925 ************************************ 00:09:16.925 14:28:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:17.184 00:09:17.184 real 0m13.157s 00:09:17.184 user 0m12.414s 00:09:17.184 sys 0m1.567s 00:09:17.184 14:28:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.184 14:28:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.184 ************************************ 00:09:17.184 END TEST skip_rpc 00:09:17.184 ************************************ 00:09:17.184 14:28:28 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:17.184 14:28:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.184 14:28:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.184 14:28:28 -- common/autotest_common.sh@10 -- # set +x 00:09:17.184 ************************************ 00:09:17.184 START TEST rpc_client 00:09:17.184 ************************************ 00:09:17.184 14:28:28 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:17.184 * Looking for test storage... 00:09:17.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:09:17.184 14:28:29 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.184 14:28:29 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.185 14:28:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.185 --rc genhtml_branch_coverage=1 00:09:17.185 --rc genhtml_function_coverage=1 00:09:17.185 --rc genhtml_legend=1 00:09:17.185 --rc geninfo_all_blocks=1 00:09:17.185 --rc geninfo_unexecuted_blocks=1 00:09:17.185 00:09:17.185 ' 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.185 --rc genhtml_branch_coverage=1 
00:09:17.185 --rc genhtml_function_coverage=1 00:09:17.185 --rc genhtml_legend=1 00:09:17.185 --rc geninfo_all_blocks=1 00:09:17.185 --rc geninfo_unexecuted_blocks=1 00:09:17.185 00:09:17.185 ' 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.185 --rc genhtml_branch_coverage=1 00:09:17.185 --rc genhtml_function_coverage=1 00:09:17.185 --rc genhtml_legend=1 00:09:17.185 --rc geninfo_all_blocks=1 00:09:17.185 --rc geninfo_unexecuted_blocks=1 00:09:17.185 00:09:17.185 ' 00:09:17.185 14:28:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.185 --rc genhtml_branch_coverage=1 00:09:17.185 --rc genhtml_function_coverage=1 00:09:17.185 --rc genhtml_legend=1 00:09:17.185 --rc geninfo_all_blocks=1 00:09:17.185 --rc geninfo_unexecuted_blocks=1 00:09:17.185 00:09:17.185 ' 00:09:17.185 14:28:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:17.444 OK 00:09:17.444 14:28:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:17.444 00:09:17.444 real 0m0.202s 00:09:17.444 user 0m0.122s 00:09:17.444 sys 0m0.093s 00:09:17.444 14:28:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.444 14:28:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:17.444 ************************************ 00:09:17.444 END TEST rpc_client 00:09:17.444 ************************************ 00:09:17.444 14:28:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:17.444 14:28:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.444 14:28:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.444 14:28:29 -- common/autotest_common.sh@10 
-- # set +x 00:09:17.444 ************************************ 00:09:17.444 START TEST json_config 00:09:17.444 ************************************ 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.444 14:28:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.444 14:28:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.444 14:28:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.444 14:28:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.444 14:28:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.444 14:28:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:17.444 14:28:29 json_config -- scripts/common.sh@345 -- # : 1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.444 14:28:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.444 14:28:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@353 -- # local d=1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.444 14:28:29 json_config -- scripts/common.sh@355 -- # echo 1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.444 14:28:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@353 -- # local d=2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.444 14:28:29 json_config -- scripts/common.sh@355 -- # echo 2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.444 14:28:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.444 14:28:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.444 14:28:29 json_config -- scripts/common.sh@368 -- # return 0 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.444 --rc genhtml_branch_coverage=1 00:09:17.444 --rc genhtml_function_coverage=1 00:09:17.444 --rc genhtml_legend=1 00:09:17.444 --rc geninfo_all_blocks=1 00:09:17.444 --rc geninfo_unexecuted_blocks=1 00:09:17.444 00:09:17.444 ' 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.444 --rc genhtml_branch_coverage=1 00:09:17.444 --rc genhtml_function_coverage=1 00:09:17.444 --rc genhtml_legend=1 00:09:17.444 --rc geninfo_all_blocks=1 00:09:17.444 --rc geninfo_unexecuted_blocks=1 00:09:17.444 00:09:17.444 ' 00:09:17.444 14:28:29 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.444 --rc genhtml_branch_coverage=1 00:09:17.444 --rc genhtml_function_coverage=1 00:09:17.444 --rc genhtml_legend=1 00:09:17.444 --rc geninfo_all_blocks=1 00:09:17.444 --rc geninfo_unexecuted_blocks=1 00:09:17.444 00:09:17.444 ' 00:09:17.444 14:28:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.444 --rc genhtml_branch_coverage=1 00:09:17.444 --rc genhtml_function_coverage=1 00:09:17.444 --rc genhtml_legend=1 00:09:17.444 --rc geninfo_all_blocks=1 00:09:17.444 --rc geninfo_unexecuted_blocks=1 00:09:17.444 00:09:17.444 ' 00:09:17.444 14:28:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.444 14:28:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.444 14:28:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.444 14:28:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.444 14:28:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.444 14:28:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.444 14:28:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.704 14:28:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.704 14:28:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.704 14:28:29 json_config -- paths/export.sh@5 -- # export PATH 00:09:17.705 14:28:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@51 -- # : 0 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.705 14:28:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:17.705 INFO: JSON configuration test init 00:09:17.705 14:28:29 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.705 14:28:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:17.705 14:28:29 json_config -- json_config/common.sh@9 -- # local app=target 00:09:17.705 14:28:29 json_config -- json_config/common.sh@10 -- # shift 00:09:17.705 14:28:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:17.705 14:28:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:17.705 14:28:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:17.705 14:28:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:17.705 14:28:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:17.705 14:28:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1401325 00:09:17.705 14:28:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:17.705 Waiting for target to run... 
00:09:17.705 14:28:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1401325 /var/tmp/spdk_tgt.sock 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 1401325 ']' 00:09:17.705 14:28:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:17.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.705 14:28:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.705 [2024-11-20 14:28:29.477830] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:17.705 [2024-11-20 14:28:29.477879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401325 ] 00:09:18.273 [2024-11-20 14:28:29.928749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.273 [2024-11-20 14:28:29.979336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.532 14:28:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.532 14:28:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:18.532 14:28:30 json_config -- json_config/common.sh@26 -- # echo '' 00:09:18.532 00:09:18.532 14:28:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:18.532 14:28:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:18.532 14:28:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.532 14:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.532 14:28:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:18.532 14:28:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:18.532 14:28:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.532 14:28:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.532 14:28:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:18.532 14:28:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:18.532 14:28:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:21.819 14:28:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.819 14:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:21.819 14:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@54 -- # sort 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:21.819 14:28:33 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:21.819 14:28:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.819 14:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:21.819 14:28:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.819 14:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:21.819 14:28:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:21.820 14:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:22.078 MallocForNvmf0 00:09:22.078 14:28:33 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:09:22.078 14:28:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:22.337 MallocForNvmf1 00:09:22.337 14:28:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:22.337 14:28:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:22.337 [2024-11-20 14:28:34.287386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.596 14:28:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.596 14:28:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.596 14:28:34 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:22.596 14:28:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:22.855 14:28:34 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:22.856 14:28:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:23.115 14:28:34 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:23.115 14:28:34 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:23.374 [2024-11-20 14:28:35.085856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:23.374 14:28:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:23.374 14:28:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.374 14:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.374 14:28:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:23.374 14:28:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.374 14:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.374 14:28:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:23.374 14:28:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:23.374 14:28:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:23.633 MallocBdevForConfigChangeCheck 00:09:23.633 14:28:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:23.633 14:28:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.633 14:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.633 14:28:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:23.633 14:28:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:23.891 14:28:35 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:09:23.891 INFO: shutting down applications...
00:09:23.891 14:28:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:09:23.891 14:28:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:09:23.891 14:28:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:09:23.892 14:28:35 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:25.799 Calling clear_iscsi_subsystem
00:09:25.799 Calling clear_nvmf_subsystem
00:09:25.799 Calling clear_nbd_subsystem
00:09:25.799 Calling clear_ublk_subsystem
00:09:25.799 Calling clear_vhost_blk_subsystem
00:09:25.799 Calling clear_vhost_scsi_subsystem
00:09:25.799 Calling clear_bdev_subsystem
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@350 -- # count=100
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@352 -- # break
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:09:25.799 14:28:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:09:25.799 14:28:37 json_config -- json_config/common.sh@31 -- # local app=target
00:09:25.799 14:28:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:25.799 14:28:37 json_config -- json_config/common.sh@35 -- # [[ -n 1401325 ]]
00:09:25.799 14:28:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1401325
00:09:25.799 14:28:37 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:25.799 14:28:37 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:25.799 14:28:37 json_config -- json_config/common.sh@41 -- # kill -0 1401325
00:09:25.799 14:28:37 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:26.368 14:28:38 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:26.368 14:28:38 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:26.368 14:28:38 json_config -- json_config/common.sh@41 -- # kill -0 1401325
00:09:26.368 14:28:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:26.368 14:28:38 json_config -- json_config/common.sh@43 -- # break
00:09:26.368 14:28:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:26.368 14:28:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:26.368 SPDK target shutdown done
00:09:26.368 14:28:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:09:26.368 INFO: relaunching applications...
00:09:26.368 14:28:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:26.368 14:28:38 json_config -- json_config/common.sh@9 -- # local app=target
00:09:26.368 14:28:38 json_config -- json_config/common.sh@10 -- # shift
00:09:26.368 14:28:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:26.368 14:28:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:26.368 14:28:38 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:26.368 14:28:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:26.368 14:28:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:26.368 14:28:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1402845
00:09:26.368 14:28:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:26.368 Waiting for target to run...
00:09:26.368 14:28:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:26.368 14:28:38 json_config -- json_config/common.sh@25 -- # waitforlisten 1402845 /var/tmp/spdk_tgt.sock
00:09:26.368 14:28:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 1402845 ']'
00:09:26.368 14:28:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:26.368 14:28:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:26.368 14:28:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:26.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:26.368 14:28:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.368 14:28:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:26.368 [2024-11-20 14:28:38.249044] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:09:26.368 [2024-11-20 14:28:38.249101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1402845 ]
00:09:26.937 [2024-11-20 14:28:38.726696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.937 [2024-11-20 14:28:38.778230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.339 [2024-11-20 14:28:41.808904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:30.339 [2024-11-20 14:28:41.841284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:30.598 14:28:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:30.598 14:28:42 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:30.598 14:28:42 json_config -- json_config/common.sh@26 -- # echo ''
00:09:30.598
00:09:30.598 14:28:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:30.598 14:28:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:30.598 INFO: Checking if target configuration is the same...
00:09:30.598 14:28:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:30.598 14:28:42 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:30.598 14:28:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:30.598 + '[' 2 -ne 2 ']'
00:09:30.598 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:30.598 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:30.598 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:30.598 +++ basename /dev/fd/62
00:09:30.598 ++ mktemp /tmp/62.XXX
00:09:30.598 + tmp_file_1=/tmp/62.SqG
00:09:30.598 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:30.598 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:30.598 + tmp_file_2=/tmp/spdk_tgt_config.json.CMa
00:09:30.598 + ret=0
00:09:30.598 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:31.167 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:31.167 + diff -u /tmp/62.SqG /tmp/spdk_tgt_config.json.CMa
00:09:31.167 + echo 'INFO: JSON config files are the same'
00:09:31.167 INFO: JSON config files are the same
00:09:31.167 + rm /tmp/62.SqG /tmp/spdk_tgt_config.json.CMa
00:09:31.167 + exit 0
00:09:31.167 14:28:42 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:31.167 14:28:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:31.167 INFO: changing configuration and checking if this can be detected...
00:09:31.167 14:28:42 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:31.167 14:28:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:31.168 14:28:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:31.168 14:28:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:31.168 14:28:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:31.168 + '[' 2 -ne 2 ']'
00:09:31.168 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:31.168 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:31.168 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:31.168 +++ basename /dev/fd/62
00:09:31.168 ++ mktemp /tmp/62.XXX
00:09:31.168 + tmp_file_1=/tmp/62.zwQ
00:09:31.168 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:31.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:31.168 + tmp_file_2=/tmp/spdk_tgt_config.json.pHg
00:09:31.168 + ret=0
00:09:31.168 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:31.739 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:31.739 + diff -u /tmp/62.zwQ /tmp/spdk_tgt_config.json.pHg
00:09:31.739 + ret=1
00:09:31.739 + echo '=== Start of file: /tmp/62.zwQ ==='
00:09:31.739 + cat /tmp/62.zwQ
00:09:31.739 + echo '=== End of file: /tmp/62.zwQ ==='
00:09:31.739 + echo ''
00:09:31.739 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pHg ==='
00:09:31.739 + cat /tmp/spdk_tgt_config.json.pHg
00:09:31.739 + echo '=== End of file: /tmp/spdk_tgt_config.json.pHg ==='
00:09:31.739 + echo ''
00:09:31.739 + rm /tmp/62.zwQ /tmp/spdk_tgt_config.json.pHg
00:09:31.739 + exit 1
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:31.739 INFO: configuration change detected.
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 1402845 ]]
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:31.739 14:28:43 json_config -- json_config/json_config.sh@330 -- # killprocess 1402845
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@954 -- # '[' -z 1402845 ']'
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@958 -- # kill -0 1402845
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@959 -- # uname
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1402845
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1402845'
00:09:31.739 killing process with pid 1402845
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@973 -- # kill 1402845
00:09:31.739 14:28:43 json_config -- common/autotest_common.sh@978 -- # wait 1402845
00:09:33.647 14:28:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:33.647 14:28:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:33.647 14:28:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:33.647 14:28:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:33.647 14:28:45 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:33.648 14:28:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:33.648 INFO: Success
00:09:33.648
00:09:33.648 real 0m15.902s
00:09:33.648 user 0m16.413s
00:09:33.648 sys 0m2.746s
00:09:33.648 14:28:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.648 14:28:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:33.648 ************************************
00:09:33.648 END TEST json_config
00:09:33.648 ************************************
00:09:33.648 14:28:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:33.648 14:28:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.648 14:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.648 14:28:45 -- common/autotest_common.sh@10 -- # set +x
00:09:33.648 ************************************
00:09:33.648 START TEST json_config_extra_key
00:09:33.648 ************************************
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.648 --rc genhtml_branch_coverage=1
00:09:33.648 --rc genhtml_function_coverage=1
00:09:33.648 --rc genhtml_legend=1
00:09:33.648 --rc geninfo_all_blocks=1
00:09:33.648 --rc geninfo_unexecuted_blocks=1
00:09:33.648
00:09:33.648 '
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.648 --rc genhtml_branch_coverage=1
00:09:33.648 --rc genhtml_function_coverage=1
00:09:33.648 --rc genhtml_legend=1
00:09:33.648 --rc geninfo_all_blocks=1
00:09:33.648 --rc geninfo_unexecuted_blocks=1
00:09:33.648
00:09:33.648 '
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.648 --rc genhtml_branch_coverage=1
00:09:33.648 --rc genhtml_function_coverage=1
00:09:33.648 --rc genhtml_legend=1
00:09:33.648 --rc geninfo_all_blocks=1
00:09:33.648 --rc geninfo_unexecuted_blocks=1
00:09:33.648
00:09:33.648 '
00:09:33.648 14:28:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:33.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.648 --rc genhtml_branch_coverage=1
00:09:33.648 --rc genhtml_function_coverage=1
00:09:33.648 --rc genhtml_legend=1
00:09:33.648 --rc geninfo_all_blocks=1
00:09:33.648 --rc geninfo_unexecuted_blocks=1
00:09:33.648
00:09:33.648 '
00:09:33.648 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:33.648 14:28:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:33.648 14:28:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:33.648 14:28:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.649 14:28:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.649 14:28:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.649 14:28:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:09:33.649 14:28:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:33.649 14:28:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:33.649 INFO: launching applications...
00:09:33.649 14:28:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1404211
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:33.649 Waiting for target to run...
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1404211 /var/tmp/spdk_tgt.sock
00:09:33.649 14:28:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1404211 ']'
00:09:33.649 14:28:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:33.649 14:28:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:33.649 14:28:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:33.649 14:28:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:33.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:33.649 14:28:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:33.649 14:28:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:33.649 [2024-11-20 14:28:45.444460] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:09:33.649 [2024-11-20 14:28:45.444513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404211 ]
00:09:33.908 [2024-11-20 14:28:45.736719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:33.908 [2024-11-20 14:28:45.771469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:34.475 14:28:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:34.475 14:28:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:34.476
00:09:34.476 14:28:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:34.476 INFO: shutting down applications...
00:09:34.476 14:28:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1404211 ]]
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1404211
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1404211
00:09:34.476 14:28:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:35.043 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:35.044 14:28:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:35.044 14:28:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1404211
00:09:35.044 14:28:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:35.044 14:28:46 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:35.044 14:28:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:35.044 14:28:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:35.044 SPDK target shutdown done
00:09:35.044 14:28:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:35.044 Success
00:09:35.044
00:09:35.044 real 0m1.584s
00:09:35.044 user 0m1.363s
00:09:35.044 sys 0m0.406s
00:09:35.044 14:28:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:35.044 14:28:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:35.044 ************************************
00:09:35.044 END TEST json_config_extra_key
00:09:35.044 ************************************
00:09:35.044 14:28:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:35.044 14:28:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:35.044 14:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:35.044 14:28:46 -- common/autotest_common.sh@10 -- # set +x
00:09:35.044 ************************************
00:09:35.044 START TEST alias_rpc
00:09:35.044 ************************************
00:09:35.044 14:28:46 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:35.044 * Looking for test storage...
00:09:35.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:09:35.044 14:28:46 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:35.044 14:28:46 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:09:35.044 14:28:46 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:35.303 14:28:47 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.303 --rc genhtml_branch_coverage=1
00:09:35.303 --rc genhtml_function_coverage=1
00:09:35.303 --rc genhtml_legend=1
00:09:35.303 --rc geninfo_all_blocks=1
00:09:35.303 --rc geninfo_unexecuted_blocks=1
00:09:35.303
00:09:35.303 '
00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.303 --rc genhtml_branch_coverage=1
00:09:35.303 --rc genhtml_function_coverage=1
00:09:35.303 --rc genhtml_legend=1
00:09:35.303 --rc geninfo_all_blocks=1
00:09:35.303 --rc geninfo_unexecuted_blocks=1
00:09:35.303
00:09:35.303 '
00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@1707 --
# export 'LCOV=lcov 00:09:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.303 --rc genhtml_branch_coverage=1 00:09:35.303 --rc genhtml_function_coverage=1 00:09:35.303 --rc genhtml_legend=1 00:09:35.303 --rc geninfo_all_blocks=1 00:09:35.303 --rc geninfo_unexecuted_blocks=1 00:09:35.303 00:09:35.303 ' 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.303 --rc genhtml_branch_coverage=1 00:09:35.303 --rc genhtml_function_coverage=1 00:09:35.303 --rc genhtml_legend=1 00:09:35.303 --rc geninfo_all_blocks=1 00:09:35.303 --rc geninfo_unexecuted_blocks=1 00:09:35.303 00:09:35.303 ' 00:09:35.303 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:35.303 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1404637 00:09:35.303 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:35.303 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1404637 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1404637 ']' 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.303 14:28:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.303 [2024-11-20 14:28:47.086321] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:35.303 [2024-11-20 14:28:47.086370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404637 ] 00:09:35.303 [2024-11-20 14:28:47.146038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.303 [2024-11-20 14:28:47.186118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.562 14:28:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.562 14:28:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:35.562 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:35.820 14:28:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1404637 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1404637 ']' 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1404637 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1404637 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1404637' 00:09:35.821 killing process with pid 1404637 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 1404637 00:09:35.821 14:28:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 1404637 00:09:36.080 00:09:36.080 real 0m1.134s 00:09:36.080 user 0m1.176s 00:09:36.080 sys 0m0.399s 00:09:36.080 14:28:47 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.080 14:28:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.080 ************************************ 00:09:36.080 END TEST alias_rpc 00:09:36.080 ************************************ 00:09:36.080 14:28:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:36.080 14:28:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:36.080 14:28:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.080 14:28:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.080 14:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:36.339 ************************************ 00:09:36.339 START TEST spdkcli_tcp 00:09:36.339 ************************************ 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:36.339 * Looking for test storage... 
00:09:36.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.339 14:28:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.339 --rc genhtml_branch_coverage=1 00:09:36.339 --rc genhtml_function_coverage=1 00:09:36.339 --rc genhtml_legend=1 00:09:36.339 --rc geninfo_all_blocks=1 00:09:36.339 --rc geninfo_unexecuted_blocks=1 00:09:36.339 00:09:36.339 ' 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.339 --rc genhtml_branch_coverage=1 00:09:36.339 --rc genhtml_function_coverage=1 00:09:36.339 --rc genhtml_legend=1 00:09:36.339 --rc geninfo_all_blocks=1 00:09:36.339 --rc geninfo_unexecuted_blocks=1 00:09:36.339 00:09:36.339 ' 00:09:36.339 14:28:48 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.339 --rc genhtml_branch_coverage=1 00:09:36.339 --rc genhtml_function_coverage=1 00:09:36.339 --rc genhtml_legend=1 00:09:36.339 --rc geninfo_all_blocks=1 00:09:36.339 --rc geninfo_unexecuted_blocks=1 00:09:36.339 00:09:36.339 ' 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.339 --rc genhtml_branch_coverage=1 00:09:36.339 --rc genhtml_function_coverage=1 00:09:36.339 --rc genhtml_legend=1 00:09:36.339 --rc geninfo_all_blocks=1 00:09:36.339 --rc geninfo_unexecuted_blocks=1 00:09:36.339 00:09:36.339 ' 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1404834 00:09:36.339 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:36.339 14:28:48 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1404834 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1404834 ']' 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.339 14:28:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.339 [2024-11-20 14:28:48.289553] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:36.339 [2024-11-20 14:28:48.289604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404834 ] 00:09:36.598 [2024-11-20 14:28:48.346842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.598 [2024-11-20 14:28:48.393968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.598 [2024-11-20 14:28:48.393972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.858 14:28:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.858 14:28:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:36.858 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1404933 00:09:36.858 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:36.859 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:36.859 [ 00:09:36.859 "bdev_malloc_delete", 00:09:36.859 "bdev_malloc_create", 00:09:36.859 "bdev_null_resize", 00:09:36.859 "bdev_null_delete", 00:09:36.859 "bdev_null_create", 00:09:36.859 "bdev_nvme_cuse_unregister", 00:09:36.859 "bdev_nvme_cuse_register", 00:09:36.859 "bdev_opal_new_user", 00:09:36.859 "bdev_opal_set_lock_state", 00:09:36.859 "bdev_opal_delete", 00:09:36.859 "bdev_opal_get_info", 00:09:36.859 "bdev_opal_create", 00:09:36.859 "bdev_nvme_opal_revert", 00:09:36.859 "bdev_nvme_opal_init", 00:09:36.859 "bdev_nvme_send_cmd", 00:09:36.859 "bdev_nvme_set_keys", 00:09:36.859 "bdev_nvme_get_path_iostat", 00:09:36.859 "bdev_nvme_get_mdns_discovery_info", 00:09:36.859 "bdev_nvme_stop_mdns_discovery", 00:09:36.859 "bdev_nvme_start_mdns_discovery", 00:09:36.859 "bdev_nvme_set_multipath_policy", 00:09:36.859 "bdev_nvme_set_preferred_path", 00:09:36.859 "bdev_nvme_get_io_paths", 00:09:36.859 "bdev_nvme_remove_error_injection", 00:09:36.859 "bdev_nvme_add_error_injection", 00:09:36.859 "bdev_nvme_get_discovery_info", 00:09:36.859 "bdev_nvme_stop_discovery", 00:09:36.859 "bdev_nvme_start_discovery", 00:09:36.859 "bdev_nvme_get_controller_health_info", 00:09:36.859 "bdev_nvme_disable_controller", 00:09:36.859 "bdev_nvme_enable_controller", 00:09:36.859 "bdev_nvme_reset_controller", 00:09:36.859 "bdev_nvme_get_transport_statistics", 00:09:36.859 "bdev_nvme_apply_firmware", 00:09:36.859 "bdev_nvme_detach_controller", 00:09:36.859 "bdev_nvme_get_controllers", 00:09:36.859 "bdev_nvme_attach_controller", 00:09:36.859 "bdev_nvme_set_hotplug", 00:09:36.859 "bdev_nvme_set_options", 00:09:36.859 "bdev_passthru_delete", 00:09:36.859 "bdev_passthru_create", 00:09:36.859 "bdev_lvol_set_parent_bdev", 00:09:36.859 "bdev_lvol_set_parent", 00:09:36.859 "bdev_lvol_check_shallow_copy", 00:09:36.859 "bdev_lvol_start_shallow_copy", 00:09:36.859 "bdev_lvol_grow_lvstore", 00:09:36.859 "bdev_lvol_get_lvols", 00:09:36.859 
"bdev_lvol_get_lvstores", 00:09:36.859 "bdev_lvol_delete", 00:09:36.859 "bdev_lvol_set_read_only", 00:09:36.859 "bdev_lvol_resize", 00:09:36.859 "bdev_lvol_decouple_parent", 00:09:36.859 "bdev_lvol_inflate", 00:09:36.859 "bdev_lvol_rename", 00:09:36.859 "bdev_lvol_clone_bdev", 00:09:36.859 "bdev_lvol_clone", 00:09:36.859 "bdev_lvol_snapshot", 00:09:36.859 "bdev_lvol_create", 00:09:36.859 "bdev_lvol_delete_lvstore", 00:09:36.859 "bdev_lvol_rename_lvstore", 00:09:36.859 "bdev_lvol_create_lvstore", 00:09:36.859 "bdev_raid_set_options", 00:09:36.859 "bdev_raid_remove_base_bdev", 00:09:36.859 "bdev_raid_add_base_bdev", 00:09:36.859 "bdev_raid_delete", 00:09:36.859 "bdev_raid_create", 00:09:36.859 "bdev_raid_get_bdevs", 00:09:36.859 "bdev_error_inject_error", 00:09:36.859 "bdev_error_delete", 00:09:36.859 "bdev_error_create", 00:09:36.859 "bdev_split_delete", 00:09:36.859 "bdev_split_create", 00:09:36.859 "bdev_delay_delete", 00:09:36.859 "bdev_delay_create", 00:09:36.859 "bdev_delay_update_latency", 00:09:36.859 "bdev_zone_block_delete", 00:09:36.859 "bdev_zone_block_create", 00:09:36.859 "blobfs_create", 00:09:36.859 "blobfs_detect", 00:09:36.859 "blobfs_set_cache_size", 00:09:36.859 "bdev_aio_delete", 00:09:36.859 "bdev_aio_rescan", 00:09:36.859 "bdev_aio_create", 00:09:36.859 "bdev_ftl_set_property", 00:09:36.859 "bdev_ftl_get_properties", 00:09:36.859 "bdev_ftl_get_stats", 00:09:36.859 "bdev_ftl_unmap", 00:09:36.859 "bdev_ftl_unload", 00:09:36.859 "bdev_ftl_delete", 00:09:36.859 "bdev_ftl_load", 00:09:36.859 "bdev_ftl_create", 00:09:36.859 "bdev_virtio_attach_controller", 00:09:36.859 "bdev_virtio_scsi_get_devices", 00:09:36.859 "bdev_virtio_detach_controller", 00:09:36.859 "bdev_virtio_blk_set_hotplug", 00:09:36.859 "bdev_iscsi_delete", 00:09:36.859 "bdev_iscsi_create", 00:09:36.859 "bdev_iscsi_set_options", 00:09:36.859 "accel_error_inject_error", 00:09:36.859 "ioat_scan_accel_module", 00:09:36.859 "dsa_scan_accel_module", 00:09:36.859 "iaa_scan_accel_module", 
00:09:36.859 "vfu_virtio_create_fs_endpoint", 00:09:36.859 "vfu_virtio_create_scsi_endpoint", 00:09:36.859 "vfu_virtio_scsi_remove_target", 00:09:36.859 "vfu_virtio_scsi_add_target", 00:09:36.859 "vfu_virtio_create_blk_endpoint", 00:09:36.859 "vfu_virtio_delete_endpoint", 00:09:36.859 "keyring_file_remove_key", 00:09:36.859 "keyring_file_add_key", 00:09:36.859 "keyring_linux_set_options", 00:09:36.859 "fsdev_aio_delete", 00:09:36.859 "fsdev_aio_create", 00:09:36.859 "iscsi_get_histogram", 00:09:36.859 "iscsi_enable_histogram", 00:09:36.859 "iscsi_set_options", 00:09:36.859 "iscsi_get_auth_groups", 00:09:36.859 "iscsi_auth_group_remove_secret", 00:09:36.859 "iscsi_auth_group_add_secret", 00:09:36.859 "iscsi_delete_auth_group", 00:09:36.859 "iscsi_create_auth_group", 00:09:36.859 "iscsi_set_discovery_auth", 00:09:36.859 "iscsi_get_options", 00:09:36.859 "iscsi_target_node_request_logout", 00:09:36.859 "iscsi_target_node_set_redirect", 00:09:36.859 "iscsi_target_node_set_auth", 00:09:36.859 "iscsi_target_node_add_lun", 00:09:36.859 "iscsi_get_stats", 00:09:36.859 "iscsi_get_connections", 00:09:36.859 "iscsi_portal_group_set_auth", 00:09:36.859 "iscsi_start_portal_group", 00:09:36.859 "iscsi_delete_portal_group", 00:09:36.859 "iscsi_create_portal_group", 00:09:36.859 "iscsi_get_portal_groups", 00:09:36.859 "iscsi_delete_target_node", 00:09:36.859 "iscsi_target_node_remove_pg_ig_maps", 00:09:36.859 "iscsi_target_node_add_pg_ig_maps", 00:09:36.859 "iscsi_create_target_node", 00:09:36.859 "iscsi_get_target_nodes", 00:09:36.859 "iscsi_delete_initiator_group", 00:09:36.859 "iscsi_initiator_group_remove_initiators", 00:09:36.859 "iscsi_initiator_group_add_initiators", 00:09:36.859 "iscsi_create_initiator_group", 00:09:36.859 "iscsi_get_initiator_groups", 00:09:36.859 "nvmf_set_crdt", 00:09:36.859 "nvmf_set_config", 00:09:36.859 "nvmf_set_max_subsystems", 00:09:36.859 "nvmf_stop_mdns_prr", 00:09:36.859 "nvmf_publish_mdns_prr", 00:09:36.859 "nvmf_subsystem_get_listeners", 
00:09:36.859 "nvmf_subsystem_get_qpairs", 00:09:36.859 "nvmf_subsystem_get_controllers", 00:09:36.859 "nvmf_get_stats", 00:09:36.859 "nvmf_get_transports", 00:09:36.859 "nvmf_create_transport", 00:09:36.859 "nvmf_get_targets", 00:09:36.859 "nvmf_delete_target", 00:09:36.859 "nvmf_create_target", 00:09:36.859 "nvmf_subsystem_allow_any_host", 00:09:36.859 "nvmf_subsystem_set_keys", 00:09:36.859 "nvmf_subsystem_remove_host", 00:09:36.859 "nvmf_subsystem_add_host", 00:09:36.859 "nvmf_ns_remove_host", 00:09:36.859 "nvmf_ns_add_host", 00:09:36.859 "nvmf_subsystem_remove_ns", 00:09:36.859 "nvmf_subsystem_set_ns_ana_group", 00:09:36.859 "nvmf_subsystem_add_ns", 00:09:36.859 "nvmf_subsystem_listener_set_ana_state", 00:09:36.859 "nvmf_discovery_get_referrals", 00:09:36.859 "nvmf_discovery_remove_referral", 00:09:36.859 "nvmf_discovery_add_referral", 00:09:36.859 "nvmf_subsystem_remove_listener", 00:09:36.859 "nvmf_subsystem_add_listener", 00:09:36.859 "nvmf_delete_subsystem", 00:09:36.859 "nvmf_create_subsystem", 00:09:36.859 "nvmf_get_subsystems", 00:09:36.859 "env_dpdk_get_mem_stats", 00:09:36.859 "nbd_get_disks", 00:09:36.859 "nbd_stop_disk", 00:09:36.859 "nbd_start_disk", 00:09:36.859 "ublk_recover_disk", 00:09:36.859 "ublk_get_disks", 00:09:36.859 "ublk_stop_disk", 00:09:36.859 "ublk_start_disk", 00:09:36.859 "ublk_destroy_target", 00:09:36.859 "ublk_create_target", 00:09:36.859 "virtio_blk_create_transport", 00:09:36.859 "virtio_blk_get_transports", 00:09:36.859 "vhost_controller_set_coalescing", 00:09:36.859 "vhost_get_controllers", 00:09:36.859 "vhost_delete_controller", 00:09:36.859 "vhost_create_blk_controller", 00:09:36.859 "vhost_scsi_controller_remove_target", 00:09:36.859 "vhost_scsi_controller_add_target", 00:09:36.859 "vhost_start_scsi_controller", 00:09:36.859 "vhost_create_scsi_controller", 00:09:36.859 "thread_set_cpumask", 00:09:36.859 "scheduler_set_options", 00:09:36.859 "framework_get_governor", 00:09:36.859 "framework_get_scheduler", 00:09:36.859 
"framework_set_scheduler", 00:09:36.859 "framework_get_reactors", 00:09:36.859 "thread_get_io_channels", 00:09:36.859 "thread_get_pollers", 00:09:36.859 "thread_get_stats", 00:09:36.859 "framework_monitor_context_switch", 00:09:36.859 "spdk_kill_instance", 00:09:36.859 "log_enable_timestamps", 00:09:36.859 "log_get_flags", 00:09:36.859 "log_clear_flag", 00:09:36.859 "log_set_flag", 00:09:36.859 "log_get_level", 00:09:36.859 "log_set_level", 00:09:36.859 "log_get_print_level", 00:09:36.859 "log_set_print_level", 00:09:36.859 "framework_enable_cpumask_locks", 00:09:36.859 "framework_disable_cpumask_locks", 00:09:36.859 "framework_wait_init", 00:09:36.859 "framework_start_init", 00:09:36.859 "scsi_get_devices", 00:09:36.859 "bdev_get_histogram", 00:09:36.859 "bdev_enable_histogram", 00:09:36.859 "bdev_set_qos_limit", 00:09:36.859 "bdev_set_qd_sampling_period", 00:09:36.859 "bdev_get_bdevs", 00:09:36.859 "bdev_reset_iostat", 00:09:36.859 "bdev_get_iostat", 00:09:36.859 "bdev_examine", 00:09:36.859 "bdev_wait_for_examine", 00:09:36.859 "bdev_set_options", 00:09:36.859 "accel_get_stats", 00:09:36.859 "accel_set_options", 00:09:36.859 "accel_set_driver", 00:09:36.859 "accel_crypto_key_destroy", 00:09:36.859 "accel_crypto_keys_get", 00:09:36.859 "accel_crypto_key_create", 00:09:36.860 "accel_assign_opc", 00:09:36.860 "accel_get_module_info", 00:09:36.860 "accel_get_opc_assignments", 00:09:36.860 "vmd_rescan", 00:09:36.860 "vmd_remove_device", 00:09:36.860 "vmd_enable", 00:09:36.860 "sock_get_default_impl", 00:09:36.860 "sock_set_default_impl", 00:09:36.860 "sock_impl_set_options", 00:09:36.860 "sock_impl_get_options", 00:09:36.860 "iobuf_get_stats", 00:09:36.860 "iobuf_set_options", 00:09:36.860 "keyring_get_keys", 00:09:36.860 "vfu_tgt_set_base_path", 00:09:36.860 "framework_get_pci_devices", 00:09:36.860 "framework_get_config", 00:09:36.860 "framework_get_subsystems", 00:09:36.860 "fsdev_set_opts", 00:09:36.860 "fsdev_get_opts", 00:09:36.860 "trace_get_info", 
00:09:36.860 "trace_get_tpoint_group_mask", 00:09:36.860 "trace_disable_tpoint_group", 00:09:36.860 "trace_enable_tpoint_group", 00:09:36.860 "trace_clear_tpoint_mask", 00:09:36.860 "trace_set_tpoint_mask", 00:09:36.860 "notify_get_notifications", 00:09:36.860 "notify_get_types", 00:09:36.860 "spdk_get_version", 00:09:36.860 "rpc_get_methods" 00:09:36.860 ] 00:09:36.860 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:36.860 14:28:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.860 14:28:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:37.119 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:37.119 14:28:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1404834 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1404834 ']' 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1404834 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1404834 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1404834' 00:09:37.119 killing process with pid 1404834 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1404834 00:09:37.119 14:28:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1404834 00:09:37.378 00:09:37.378 real 0m1.139s 00:09:37.378 user 0m1.930s 00:09:37.378 sys 0m0.458s 00:09:37.378 14:28:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.378 14:28:49 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.378 ************************************ 00:09:37.378 END TEST spdkcli_tcp 00:09:37.378 ************************************ 00:09:37.378 14:28:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:37.378 14:28:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.378 14:28:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.378 14:28:49 -- common/autotest_common.sh@10 -- # set +x 00:09:37.378 ************************************ 00:09:37.378 START TEST dpdk_mem_utility 00:09:37.378 ************************************ 00:09:37.378 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:37.637 * Looking for test storage... 00:09:37.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:37.637 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.637 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.637 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.637 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.637 14:28:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:37.637 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.637 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:09:37.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.637 --rc genhtml_branch_coverage=1 00:09:37.637 --rc genhtml_function_coverage=1 00:09:37.637 --rc genhtml_legend=1 00:09:37.637 --rc geninfo_all_blocks=1 00:09:37.637 --rc geninfo_unexecuted_blocks=1 00:09:37.637 00:09:37.637 ' 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.638 --rc genhtml_branch_coverage=1 00:09:37.638 --rc genhtml_function_coverage=1 00:09:37.638 --rc genhtml_legend=1 00:09:37.638 --rc geninfo_all_blocks=1 00:09:37.638 --rc geninfo_unexecuted_blocks=1 00:09:37.638 00:09:37.638 ' 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.638 --rc genhtml_branch_coverage=1 00:09:37.638 --rc genhtml_function_coverage=1 00:09:37.638 --rc genhtml_legend=1 00:09:37.638 --rc geninfo_all_blocks=1 00:09:37.638 --rc geninfo_unexecuted_blocks=1 00:09:37.638 00:09:37.638 ' 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.638 --rc genhtml_branch_coverage=1 00:09:37.638 --rc genhtml_function_coverage=1 00:09:37.638 --rc genhtml_legend=1 00:09:37.638 --rc geninfo_all_blocks=1 00:09:37.638 --rc geninfo_unexecuted_blocks=1 00:09:37.638 00:09:37.638 ' 00:09:37.638 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:37.638 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1405035 00:09:37.638 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1405035 00:09:37.638 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1405035 ']' 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.638 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:37.638 [2024-11-20 14:28:49.493301] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:37.638 [2024-11-20 14:28:49.493354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405035 ] 00:09:37.638 [2024-11-20 14:28:49.569762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.898 [2024-11-20 14:28:49.611298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.898 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.898 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:37.898 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:37.898 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:37.898 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.898 
14:28:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:37.898 { 00:09:37.898 "filename": "/tmp/spdk_mem_dump.txt" 00:09:37.898 } 00:09:37.898 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.898 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:38.159 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:38.159 1 heaps totaling size 818.000000 MiB 00:09:38.159 size: 818.000000 MiB heap id: 0 00:09:38.159 end heaps---------- 00:09:38.159 9 mempools totaling size 603.782043 MiB 00:09:38.159 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:38.159 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:38.159 size: 100.555481 MiB name: bdev_io_1405035 00:09:38.159 size: 50.003479 MiB name: msgpool_1405035 00:09:38.159 size: 36.509338 MiB name: fsdev_io_1405035 00:09:38.159 size: 21.763794 MiB name: PDU_Pool 00:09:38.159 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:38.159 size: 4.133484 MiB name: evtpool_1405035 00:09:38.159 size: 0.026123 MiB name: Session_Pool 00:09:38.159 end mempools------- 00:09:38.159 6 memzones totaling size 4.142822 MiB 00:09:38.159 size: 1.000366 MiB name: RG_ring_0_1405035 00:09:38.159 size: 1.000366 MiB name: RG_ring_1_1405035 00:09:38.159 size: 1.000366 MiB name: RG_ring_4_1405035 00:09:38.159 size: 1.000366 MiB name: RG_ring_5_1405035 00:09:38.159 size: 0.125366 MiB name: RG_ring_2_1405035 00:09:38.159 size: 0.015991 MiB name: RG_ring_3_1405035 00:09:38.159 end memzones------- 00:09:38.159 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:38.159 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:38.159 list of free elements. 
size: 10.852478 MiB 00:09:38.159 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:38.159 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:38.159 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:38.159 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:38.159 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:38.159 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:38.159 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:38.159 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:38.159 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:09:38.159 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:38.159 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:38.159 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:38.159 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:38.159 element at address: 0x200028200000 with size: 0.410034 MiB 00:09:38.159 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:38.159 list of standard malloc elements. 
size: 199.218628 MiB 00:09:38.159 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:38.159 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:38.159 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:38.159 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:38.159 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:38.159 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:38.159 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:38.159 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:38.159 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:38.159 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:38.159 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:38.159 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200028268f80 with size: 0.000183 MiB 00:09:38.159 element at address: 0x200028269040 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:38.159 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:38.159 list of memzone associated elements. 
size: 607.928894 MiB 00:09:38.159 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:38.159 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:38.159 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:38.159 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:38.159 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:38.159 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1405035_0 00:09:38.159 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:38.159 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1405035_0 00:09:38.159 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:38.159 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1405035_0 00:09:38.159 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:38.159 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:38.159 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:38.159 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:38.159 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:38.159 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1405035_0 00:09:38.159 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:38.159 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1405035 00:09:38.159 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:38.159 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1405035 00:09:38.159 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:38.159 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:38.159 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:38.159 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:38.159 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:38.159 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:38.159 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:38.159 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:38.159 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:38.159 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1405035 00:09:38.159 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:38.159 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1405035 00:09:38.159 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:38.159 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1405035 00:09:38.159 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:38.159 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1405035 00:09:38.159 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:38.159 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1405035 00:09:38.159 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:38.159 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1405035 00:09:38.159 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:38.159 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:38.159 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:38.159 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:38.159 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:38.159 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:38.159 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:38.159 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1405035 00:09:38.159 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:38.159 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1405035 00:09:38.160 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:09:38.160 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:38.160 element at address: 0x200028269100 with size: 0.023743 MiB 00:09:38.160 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:38.160 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:38.160 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1405035 00:09:38.160 element at address: 0x20002826f240 with size: 0.002441 MiB 00:09:38.160 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:38.160 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:38.160 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1405035 00:09:38.160 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:38.160 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1405035 00:09:38.160 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:38.160 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1405035 00:09:38.160 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:09:38.160 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:38.160 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:38.160 14:28:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1405035 00:09:38.160 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1405035 ']' 00:09:38.160 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1405035 00:09:38.160 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:38.160 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.160 14:28:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1405035 00:09:38.160 14:28:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.160 14:28:50 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.160 14:28:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1405035' 00:09:38.160 killing process with pid 1405035 00:09:38.160 14:28:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1405035 00:09:38.160 14:28:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1405035 00:09:38.417 00:09:38.417 real 0m1.049s 00:09:38.417 user 0m0.991s 00:09:38.417 sys 0m0.415s 00:09:38.417 14:28:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.417 14:28:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 ************************************ 00:09:38.417 END TEST dpdk_mem_utility 00:09:38.417 ************************************ 00:09:38.417 14:28:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:38.417 14:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.417 14:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.417 14:28:50 -- common/autotest_common.sh@10 -- # set +x 00:09:38.676 ************************************ 00:09:38.676 START TEST event 00:09:38.676 ************************************ 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:38.676 * Looking for test storage... 
00:09:38.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.676 14:28:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.676 14:28:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.676 14:28:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.676 14:28:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.676 14:28:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.676 14:28:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.676 14:28:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.676 14:28:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.676 14:28:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.676 14:28:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.676 14:28:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.676 14:28:50 event -- scripts/common.sh@344 -- # case "$op" in 00:09:38.676 14:28:50 event -- scripts/common.sh@345 -- # : 1 00:09:38.676 14:28:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.676 14:28:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.676 14:28:50 event -- scripts/common.sh@365 -- # decimal 1 00:09:38.676 14:28:50 event -- scripts/common.sh@353 -- # local d=1 00:09:38.676 14:28:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.676 14:28:50 event -- scripts/common.sh@355 -- # echo 1 00:09:38.676 14:28:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.676 14:28:50 event -- scripts/common.sh@366 -- # decimal 2 00:09:38.676 14:28:50 event -- scripts/common.sh@353 -- # local d=2 00:09:38.676 14:28:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.676 14:28:50 event -- scripts/common.sh@355 -- # echo 2 00:09:38.676 14:28:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.676 14:28:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.676 14:28:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.676 14:28:50 event -- scripts/common.sh@368 -- # return 0 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.676 --rc genhtml_branch_coverage=1 00:09:38.676 --rc genhtml_function_coverage=1 00:09:38.676 --rc genhtml_legend=1 00:09:38.676 --rc geninfo_all_blocks=1 00:09:38.676 --rc geninfo_unexecuted_blocks=1 00:09:38.676 00:09:38.676 ' 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.676 --rc genhtml_branch_coverage=1 00:09:38.676 --rc genhtml_function_coverage=1 00:09:38.676 --rc genhtml_legend=1 00:09:38.676 --rc geninfo_all_blocks=1 00:09:38.676 --rc geninfo_unexecuted_blocks=1 00:09:38.676 00:09:38.676 ' 00:09:38.676 14:28:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.676 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:38.676 --rc genhtml_branch_coverage=1 00:09:38.676 --rc genhtml_function_coverage=1 00:09:38.676 --rc genhtml_legend=1 00:09:38.676 --rc geninfo_all_blocks=1 00:09:38.676 --rc geninfo_unexecuted_blocks=1 00:09:38.677 00:09:38.677 ' 00:09:38.677 14:28:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.677 --rc genhtml_branch_coverage=1 00:09:38.677 --rc genhtml_function_coverage=1 00:09:38.677 --rc genhtml_legend=1 00:09:38.677 --rc geninfo_all_blocks=1 00:09:38.677 --rc geninfo_unexecuted_blocks=1 00:09:38.677 00:09:38.677 ' 00:09:38.677 14:28:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:38.677 14:28:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:38.677 14:28:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:38.677 14:28:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:38.677 14:28:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.677 14:28:50 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.677 ************************************ 00:09:38.677 START TEST event_perf 00:09:38.677 ************************************ 00:09:38.677 14:28:50 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:38.677 Running I/O for 1 seconds...[2024-11-20 14:28:50.614159] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:38.677 [2024-11-20 14:28:50.614228] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405312 ] 00:09:38.936 [2024-11-20 14:28:50.692620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.936 [2024-11-20 14:28:50.737272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.937 [2024-11-20 14:28:50.737378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.937 [2024-11-20 14:28:50.737486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.937 [2024-11-20 14:28:50.737487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.874 Running I/O for 1 seconds... 00:09:39.874 lcore 0: 205363 00:09:39.874 lcore 1: 205362 00:09:39.874 lcore 2: 205362 00:09:39.874 lcore 3: 205362 00:09:39.874 done. 
00:09:39.874 00:09:39.874 real 0m1.184s 00:09:39.874 user 0m4.092s 00:09:39.874 sys 0m0.088s 00:09:39.874 14:28:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.874 14:28:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.874 ************************************ 00:09:39.874 END TEST event_perf 00:09:39.874 ************************************ 00:09:39.874 14:28:51 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:39.874 14:28:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.874 14:28:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.874 14:28:51 event -- common/autotest_common.sh@10 -- # set +x 00:09:40.133 ************************************ 00:09:40.133 START TEST event_reactor 00:09:40.133 ************************************ 00:09:40.133 14:28:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:40.133 [2024-11-20 14:28:51.869653] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:40.133 [2024-11-20 14:28:51.869722] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405562 ] 00:09:40.133 [2024-11-20 14:28:51.946858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.133 [2024-11-20 14:28:51.987186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.068 test_start 00:09:41.068 oneshot 00:09:41.068 tick 100 00:09:41.068 tick 100 00:09:41.068 tick 250 00:09:41.068 tick 100 00:09:41.068 tick 100 00:09:41.068 tick 250 00:09:41.068 tick 100 00:09:41.068 tick 500 00:09:41.068 tick 100 00:09:41.068 tick 100 00:09:41.068 tick 250 00:09:41.068 tick 100 00:09:41.068 tick 100 00:09:41.068 test_end 00:09:41.068 00:09:41.068 real 0m1.175s 00:09:41.068 user 0m1.098s 00:09:41.068 sys 0m0.073s 00:09:41.068 14:28:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.068 14:28:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:41.068 ************************************ 00:09:41.068 END TEST event_reactor 00:09:41.068 ************************************ 00:09:41.328 14:28:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:41.328 14:28:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.328 14:28:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.328 14:28:53 event -- common/autotest_common.sh@10 -- # set +x 00:09:41.328 ************************************ 00:09:41.328 START TEST event_reactor_perf 00:09:41.328 ************************************ 00:09:41.328 14:28:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:09:41.328 [2024-11-20 14:28:53.112504] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:41.328 [2024-11-20 14:28:53.112576] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405814 ] 00:09:41.328 [2024-11-20 14:28:53.189618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.328 [2024-11-20 14:28:53.230189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.706 test_start 00:09:42.706 test_end 00:09:42.706 Performance: 503744 events per second 00:09:42.706 00:09:42.706 real 0m1.177s 00:09:42.706 user 0m1.105s 00:09:42.706 sys 0m0.068s 00:09:42.706 14:28:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.706 14:28:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:42.706 ************************************ 00:09:42.706 END TEST event_reactor_perf 00:09:42.706 ************************************ 00:09:42.706 14:28:54 event -- event/event.sh@49 -- # uname -s 00:09:42.706 14:28:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:42.706 14:28:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:42.706 14:28:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.706 14:28:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.706 14:28:54 event -- common/autotest_common.sh@10 -- # set +x 00:09:42.706 ************************************ 00:09:42.706 START TEST event_scheduler 00:09:42.706 ************************************ 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:42.706 * Looking for test storage... 00:09:42.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.706 14:28:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.706 --rc genhtml_branch_coverage=1 00:09:42.706 --rc genhtml_function_coverage=1 00:09:42.706 --rc genhtml_legend=1 00:09:42.706 --rc geninfo_all_blocks=1 00:09:42.706 --rc geninfo_unexecuted_blocks=1 00:09:42.706 00:09:42.706 ' 00:09:42.706 14:28:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.706 --rc genhtml_branch_coverage=1 00:09:42.706 --rc genhtml_function_coverage=1 00:09:42.706 --rc 
genhtml_legend=1 00:09:42.706 --rc geninfo_all_blocks=1 00:09:42.706 --rc geninfo_unexecuted_blocks=1 00:09:42.706 00:09:42.706 ' 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.707 --rc genhtml_branch_coverage=1 00:09:42.707 --rc genhtml_function_coverage=1 00:09:42.707 --rc genhtml_legend=1 00:09:42.707 --rc geninfo_all_blocks=1 00:09:42.707 --rc geninfo_unexecuted_blocks=1 00:09:42.707 00:09:42.707 ' 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.707 --rc genhtml_branch_coverage=1 00:09:42.707 --rc genhtml_function_coverage=1 00:09:42.707 --rc genhtml_legend=1 00:09:42.707 --rc geninfo_all_blocks=1 00:09:42.707 --rc geninfo_unexecuted_blocks=1 00:09:42.707 00:09:42.707 ' 00:09:42.707 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:42.707 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1406096 00:09:42.707 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:42.707 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:42.707 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1406096 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1406096 ']' 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.707 14:28:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.707 [2024-11-20 14:28:54.561638] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:09:42.707 [2024-11-20 14:28:54.561687] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406096 ] 00:09:42.707 [2024-11-20 14:28:54.635364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.966 [2024-11-20 14:28:54.680038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.966 [2024-11-20 14:28:54.680143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.966 [2024-11-20 14:28:54.680252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.966 [2024-11-20 14:28:54.680252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:42.966 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.966 [2024-11-20 14:28:54.724753] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:42.966 [2024-11-20 14:28:54.724770] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:42.966 [2024-11-20 14:28:54.724780] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:42.966 [2024-11-20 14:28:54.724785] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:42.966 [2024-11-20 14:28:54.724790] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.966 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.966 [2024-11-20 14:28:54.803504] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.966 14:28:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.966 14:28:54 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 ************************************ 00:09:42.967 START TEST scheduler_create_thread 00:09:42.967 ************************************ 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 2 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 3 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 4 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 5 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 6 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 7 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 8 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.967 9 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.967 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.225 10 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.225 14:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.161 14:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.161 14:28:55 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:44.161 14:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.161 14:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:45.539 14:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.539 14:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:45.539 14:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:45.539 14:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.539 14:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.475 14:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.475 00:09:46.475 real 0m3.379s 00:09:46.475 user 0m0.022s 00:09:46.475 sys 0m0.006s 00:09:46.475 14:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.475 14:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.475 ************************************ 00:09:46.475 END TEST scheduler_create_thread 00:09:46.475 ************************************ 00:09:46.475 14:28:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:46.475 14:28:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1406096 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1406096 ']' 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1406096 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1406096 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1406096' 00:09:46.475 killing process with pid 1406096 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1406096 00:09:46.475 14:28:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1406096 00:09:46.735 [2024-11-20 14:28:58.599604] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:09:46.994 00:09:46.994 real 0m4.466s 00:09:46.994 user 0m7.814s 00:09:46.994 sys 0m0.376s 00:09:46.994 14:28:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.994 14:28:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:46.994 ************************************ 00:09:46.994 END TEST event_scheduler 00:09:46.994 ************************************ 00:09:46.994 14:28:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:46.994 14:28:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:46.994 14:28:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.994 14:28:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.994 14:28:58 event -- common/autotest_common.sh@10 -- # set +x 00:09:46.994 ************************************ 00:09:46.994 START TEST app_repeat 00:09:46.994 ************************************ 00:09:46.994 14:28:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1406840 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1406840' 00:09:46.994 Process app_repeat pid: 1406840 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:46.994 spdk_app_start Round 0 00:09:46.994 14:28:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1406840 /var/tmp/spdk-nbd.sock 00:09:46.994 14:28:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1406840 ']' 00:09:46.994 14:28:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:46.994 14:28:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.994 14:28:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:46.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:46.995 14:28:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.995 14:28:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:46.995 [2024-11-20 14:28:58.925251] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:09:46.995 [2024-11-20 14:28:58.925305] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406840 ] 00:09:47.253 [2024-11-20 14:28:59.001671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.253 [2024-11-20 14:28:59.046552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.253 [2024-11-20 14:28:59.046553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.253 14:28:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.253 14:28:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:47.253 14:28:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:47.511 Malloc0 00:09:47.511 14:28:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:47.770 Malloc1 00:09:47.770 14:28:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:47.770 14:28:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.770 14:28:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.770 14:28:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:47.770 14:28:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.770 14:28:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:47.770 14:28:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:47.770 
14:28:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.771 14:28:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:48.029 /dev/nbd0 00:09:48.029 14:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:48.029 14:28:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.029 14:28:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:48.030 1+0 records in 00:09:48.030 1+0 records out 00:09:48.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018441 s, 22.2 MB/s 00:09:48.030 14:28:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:48.030 14:28:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:48.030 14:28:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:48.030 14:28:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.030 14:28:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:48.030 14:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.030 14:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.030 14:28:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:48.289 /dev/nbd1 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.289 14:29:00 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:48.289 1+0 records in 00:09:48.289 1+0 records out 00:09:48.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266499 s, 15.4 MB/s 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.289 14:29:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:48.289 { 00:09:48.289 "nbd_device": "/dev/nbd0", 00:09:48.289 "bdev_name": "Malloc0" 00:09:48.289 }, 00:09:48.289 { 00:09:48.289 "nbd_device": "/dev/nbd1", 00:09:48.289 "bdev_name": "Malloc1" 00:09:48.289 } 00:09:48.289 ]' 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:48.289 { 00:09:48.289 "nbd_device": "/dev/nbd0", 00:09:48.289 "bdev_name": "Malloc0" 00:09:48.289 
}, 00:09:48.289 { 00:09:48.289 "nbd_device": "/dev/nbd1", 00:09:48.289 "bdev_name": "Malloc1" 00:09:48.289 } 00:09:48.289 ]' 00:09:48.289 14:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:48.548 /dev/nbd1' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:48.548 /dev/nbd1' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:48.548 256+0 records in 00:09:48.548 256+0 records out 00:09:48.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100848 s, 104 MB/s 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:48.548 256+0 records in 00:09:48.548 256+0 records out 00:09:48.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149773 s, 70.0 MB/s 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:48.548 256+0 records in 00:09:48.548 256+0 records out 00:09:48.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158432 s, 66.2 MB/s 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:48.548 14:29:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:48.549 14:29:00 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.549 14:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.808 14:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:49.066 14:29:00 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.066 14:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.066 14:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:49.066 14:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:49.066 14:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:49.325 14:29:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:49.325 14:29:01 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:49.326 14:29:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:49.585 [2024-11-20 14:29:01.417500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.585 [2024-11-20 14:29:01.454735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.585 [2024-11-20 14:29:01.454736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.585 [2024-11-20 14:29:01.495625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:49.585 [2024-11-20 14:29:01.495679] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:52.871 14:29:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:52.871 14:29:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:52.871 spdk_app_start Round 1 00:09:52.871 14:29:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1406840 /var/tmp/spdk-nbd.sock 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1406840 ']' 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:52.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.871 14:29:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:52.871 14:29:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:52.871 Malloc0 00:09:52.871 14:29:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:53.128 Malloc1 00:09:53.128 14:29:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:53.128 14:29:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.129 14:29:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:53.387 /dev/nbd0 00:09:53.387 14:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:53.387 14:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:53.387 1+0 records in 00:09:53.387 1+0 records out 00:09:53.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187602 s, 21.8 MB/s 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:53.387 14:29:05 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:53.387 14:29:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:53.387 14:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.387 14:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.387 14:29:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:53.646 /dev/nbd1 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:53.646 1+0 records in 00:09:53.646 1+0 records out 00:09:53.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156955 s, 26.1 MB/s 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:53.646 14:29:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.646 14:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:53.905 { 00:09:53.905 "nbd_device": "/dev/nbd0", 00:09:53.905 "bdev_name": "Malloc0" 00:09:53.905 }, 00:09:53.905 { 00:09:53.905 "nbd_device": "/dev/nbd1", 00:09:53.905 "bdev_name": "Malloc1" 00:09:53.905 } 00:09:53.905 ]' 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:53.905 { 00:09:53.905 "nbd_device": "/dev/nbd0", 00:09:53.905 "bdev_name": "Malloc0" 00:09:53.905 }, 00:09:53.905 { 00:09:53.905 "nbd_device": "/dev/nbd1", 00:09:53.905 "bdev_name": "Malloc1" 00:09:53.905 } 00:09:53.905 ]' 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:53.905 /dev/nbd1' 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:53.905 /dev/nbd1' 00:09:53.905 
14:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:53.905 256+0 records in 00:09:53.905 256+0 records out 00:09:53.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00988339 s, 106 MB/s 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:53.905 256+0 records in 00:09:53.905 256+0 records out 00:09:53.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140717 s, 74.5 MB/s 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.905 14:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:53.905 256+0 records in 00:09:53.906 256+0 records out 00:09:53.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154139 s, 68.0 MB/s 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.906 14:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.165 14:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:54.423 14:29:06 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.423 14:29:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.424 14:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.424 14:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.424 14:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.424 14:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:54.683 14:29:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:54.683 14:29:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:54.683 14:29:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:54.942 [2024-11-20 14:29:06.775282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.942 [2024-11-20 14:29:06.813193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.942 [2024-11-20 14:29:06.813194] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.942 [2024-11-20 14:29:06.855063] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.942 [2024-11-20 14:29:06.855105] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:58.227 14:29:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:58.227 14:29:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:58.227 spdk_app_start Round 2 00:09:58.227 14:29:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1406840 /var/tmp/spdk-nbd.sock 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1406840 ']' 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:58.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.227 14:29:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:58.227 14:29:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:58.227 Malloc0 00:09:58.227 14:29:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:58.486 Malloc1 00:09:58.486 14:29:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.486 14:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:58.744 /dev/nbd0 00:09:58.744 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:58.744 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.744 1+0 records in 00:09:58.744 1+0 records out 00:09:58.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141445 s, 29.0 MB/s 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:58.744 14:29:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:58.744 14:29:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:58.744 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.744 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.744 14:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:59.004 /dev/nbd1 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:59.004 1+0 records in 00:09:59.004 1+0 records out 00:09:59.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230769 s, 17.7 MB/s 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:59.004 14:29:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.004 14:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:59.004 { 00:09:59.004 "nbd_device": "/dev/nbd0", 00:09:59.004 "bdev_name": "Malloc0" 00:09:59.004 }, 00:09:59.004 { 00:09:59.004 "nbd_device": "/dev/nbd1", 00:09:59.004 "bdev_name": "Malloc1" 00:09:59.004 } 00:09:59.004 ]' 00:09:59.263 14:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:59.263 { 00:09:59.263 "nbd_device": "/dev/nbd0", 00:09:59.263 "bdev_name": "Malloc0" 00:09:59.263 }, 00:09:59.263 { 00:09:59.263 "nbd_device": "/dev/nbd1", 00:09:59.263 "bdev_name": "Malloc1" 00:09:59.263 } 00:09:59.263 ]' 00:09:59.263 14:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.263 14:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:59.263 /dev/nbd1' 00:09:59.263 14:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:59.263 /dev/nbd1' 00:09:59.263 
14:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:59.263 256+0 records in 00:09:59.263 256+0 records out 00:09:59.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106669 s, 98.3 MB/s 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:59.263 256+0 records in 00:09:59.263 256+0 records out 00:09:59.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150462 s, 69.7 MB/s 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:59.263 256+0 records in 00:09:59.263 256+0 records out 00:09:59.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01497 s, 70.0 MB/s 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:59.263 14:29:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.264 14:29:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:59.264 14:29:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:59.264 14:29:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:59.264 14:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.264 14:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.521 14:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:59.780 14:29:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.780 14:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:00.038 14:29:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:00.038 14:29:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:00.038 14:29:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:00.297 [2024-11-20 14:29:12.103005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.297 [2024-11-20 14:29:12.140470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.297 [2024-11-20 14:29:12.140471] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.297 [2024-11-20 14:29:12.181580] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:00.297 [2024-11-20 14:29:12.181618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:03.585 14:29:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1406840 /var/tmp/spdk-nbd.sock 00:10:03.585 14:29:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1406840 ']' 00:10:03.585 14:29:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:03.585 14:29:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.585 14:29:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:03.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
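The write-then-verify pattern driven earlier by nbd_dd_data_verify (dd a 1 MiB random file onto each /dev/nbdX, then cmp it back byte-for-byte) can be sketched standalone. This is a hedged reconstruction, not the SPDK script itself: a plain temp file stands in for the NBD device, since a real /dev/nbd* target needs the SPDK nbd server and oflag=direct.

```shell
#!/usr/bin/env bash
# Same geometry as the log: 256 blocks of 4096 bytes = 1 MiB.
tmp_file=$(mktemp)
target=$(mktemp)   # stand-in for /dev/nbd0; a real run would add oflag=direct
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$target" bs=4096 count=256 2>/dev/null
# Byte-wise compare of the first 1M, like the `cmp -b -n 1M` at nbd_common.sh@83.
if cmp -s -n 1M "$tmp_file" "$target"; then
  verify=ok
else
  verify=failed
fi
echo "verify: $verify"
rm -f "$tmp_file" "$target"
```

The real helper loops the cmp over every entry in nbd_list and removes the temp file afterwards, exactly as the trace shows.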
00:10:03.585 14:29:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.585 14:29:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:03.585 14:29:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.585 14:29:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:03.585 14:29:15 event.app_repeat -- event/event.sh@39 -- # killprocess 1406840 00:10:03.585 14:29:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1406840 ']' 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1406840 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1406840 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1406840' 00:10:03.586 killing process with pid 1406840 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1406840 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1406840 00:10:03.586 spdk_app_start is called in Round 0. 00:10:03.586 Shutdown signal received, stop current app iteration 00:10:03.586 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:10:03.586 spdk_app_start is called in Round 1. 00:10:03.586 Shutdown signal received, stop current app iteration 00:10:03.586 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:10:03.586 spdk_app_start is called in Round 2. 
00:10:03.586 Shutdown signal received, stop current app iteration 00:10:03.586 Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 reinitialization... 00:10:03.586 spdk_app_start is called in Round 3. 00:10:03.586 Shutdown signal received, stop current app iteration 00:10:03.586 14:29:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:03.586 14:29:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:03.586 00:10:03.586 real 0m16.460s 00:10:03.586 user 0m36.223s 00:10:03.586 sys 0m2.581s 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.586 14:29:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:03.586 ************************************ 00:10:03.586 END TEST app_repeat 00:10:03.586 ************************************ 00:10:03.586 14:29:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:03.586 14:29:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:03.586 14:29:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.586 14:29:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.586 14:29:15 event -- common/autotest_common.sh@10 -- # set +x 00:10:03.586 ************************************ 00:10:03.586 START TEST cpu_locks 00:10:03.586 ************************************ 00:10:03.586 14:29:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:03.586 * Looking for test storage... 
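The killprocess flow traced above (a kill -0 liveness probe, a `ps --no-headers -o comm=` lookup that refuses to signal a sudo wrapper, then kill plus wait) can be approximated in a few lines. The pid and the reactor_0 name are the log's; everything in this sketch is a generic stand-in, not the autotest_common.sh source.

```shell
#!/usr/bin/env bash
# Minimal analogue of autotest_common.sh's killprocess helper.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # is the process alive at all?
  local process_name
  process_name=$(ps -o comm= -p "$pid")    # e.g. reactor_0 for an SPDK target
  [ "$process_name" = sudo ] && return 1   # never TERM a sudo wrapper directly
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                  # reap it so the pid cannot be reused
  return 0
}

sleep 30 &
victim=$!
killprocess "$victim" && result=killed
echo "$result"
```

The wait after kill is what lets the suite's later `wait $pid` lines (autotest_common.sh@978) return promptly instead of racing the OS reaper.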
00:10:03.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:03.586 14:29:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.586 14:29:15 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.586 14:29:15 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.844 14:29:15 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.844 14:29:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:03.844 14:29:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.844 14:29:15 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.844 --rc genhtml_branch_coverage=1 00:10:03.844 --rc genhtml_function_coverage=1 00:10:03.844 --rc genhtml_legend=1 00:10:03.844 --rc geninfo_all_blocks=1 00:10:03.844 --rc geninfo_unexecuted_blocks=1 00:10:03.844 00:10:03.844 ' 00:10:03.844 14:29:15 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.844 --rc genhtml_branch_coverage=1 00:10:03.844 --rc genhtml_function_coverage=1 00:10:03.844 --rc genhtml_legend=1 00:10:03.844 --rc geninfo_all_blocks=1 00:10:03.844 --rc geninfo_unexecuted_blocks=1 
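The cmp_versions walk just traced (scripts/common.sh@333-368: split both versions on the `.-:` separators, pad the shorter one with zeros, compare field by field) amounts to the following. The name lt matches the helper invoked at @373; the body is a simplified reconstruction, not the shipped source.

```shell
#!/usr/bin/env bash
# Simplified reconstruction of scripts/common.sh's lt/cmp_versions:
# returns 0 (true) when $1 is strictly older than $2.
lt() {
  local IFS=.-:                # same separators as the `IFS=.-:; read -ra ver1`
  local -a ver1=($1) ver2=($2)
  local v a b
  for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
    a=${ver1[v]:-0}            # missing fields compare as 0
    b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                     # equal is not "less than"
}

lt 1.15 2  && echo "1.15 < 2"
lt 2.1  2  || echo "2.1 !< 2"
```

This is why the lcov check in the trace compares `1.15` against `2` field-wise rather than lexically: a string comparison would wrongly order `1.15` after `1.9`.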
00:10:03.844 00:10:03.844 ' 00:10:03.844 14:29:15 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.844 --rc genhtml_branch_coverage=1 00:10:03.844 --rc genhtml_function_coverage=1 00:10:03.844 --rc genhtml_legend=1 00:10:03.844 --rc geninfo_all_blocks=1 00:10:03.844 --rc geninfo_unexecuted_blocks=1 00:10:03.844 00:10:03.844 ' 00:10:03.844 14:29:15 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.844 --rc genhtml_branch_coverage=1 00:10:03.844 --rc genhtml_function_coverage=1 00:10:03.844 --rc genhtml_legend=1 00:10:03.844 --rc geninfo_all_blocks=1 00:10:03.845 --rc geninfo_unexecuted_blocks=1 00:10:03.845 00:10:03.845 ' 00:10:03.845 14:29:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:03.845 14:29:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:03.845 14:29:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:03.845 14:29:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:03.845 14:29:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.845 14:29:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.845 14:29:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.845 ************************************ 00:10:03.845 START TEST default_locks 00:10:03.845 ************************************ 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1409963 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1409963 00:10:03.845 14:29:15 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1409963 ']' 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.845 14:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.845 [2024-11-20 14:29:15.680751] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
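Both waitfornbd_exit earlier (nbd_common.sh@37-41: poll /proc/partitions up to 20 times and break once the nbd name disappears) and the waitforlisten retries here follow the same bounded-poll shape. A standalone sketch, with a temp file standing in for /proc/partitions so no real nbd device is needed:

```shell
#!/usr/bin/env bash
# Bounded polling loop in the style of waitfornbd_exit.
partitions=$(mktemp)
printf 'sda\nnbd0\n' > "$partitions"

# Simulate asynchronous teardown: drop the nbd0 entry in the background.
( grep -v -w nbd0 "$partitions" > "$partitions.new" \
    && mv "$partitions.new" "$partitions" ) &

status=timeout
for (( i = 1; i <= 20; i++ )); do
  if ! grep -q -w nbd0 "$partitions"; then
    status=gone
    break                     # the `break` at nbd_common.sh@41
  fi
  sleep 0.1
done
wait
echo "nbd0: $status"
rm -f "$partitions"
```

The cap of 20 iterations is what turns a hung teardown into a visible test failure instead of an infinite wait.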
00:10:03.845 [2024-11-20 14:29:15.680796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409963 ] 00:10:03.845 [2024-11-20 14:29:15.756604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.845 [2024-11-20 14:29:15.798996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.103 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.103 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:04.103 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1409963 00:10:04.103 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1409963 00:10:04.103 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:04.362 lslocks: write error 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1409963 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1409963 ']' 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1409963 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1409963 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1409963' 00:10:04.622 killing process with pid 1409963 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1409963 00:10:04.622 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1409963 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1409963 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1409963 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1409963 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1409963 ']' 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
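The `NOT waitforlisten` sequence above (es=0, run the wrapped command, keep its exit status, then require a nonzero result) is autotest_common.sh's way of asserting that a command fails. A minimal standalone version of the idiom, with the `(( es > 128 ))` signal normalization seen in the trace:

```shell
#!/usr/bin/env bash
# Minimal analogue of autotest_common.sh's NOT helper: succeed only when the
# wrapped command fails; exit codes above 128 (signals) count as failure too.
NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && es=1      # mirrors the `(( es > 128 ))` check in the trace
  (( es != 0 ))
}

NOT false && echo "expected failure: ok"
NOT true  || echo "unexpected success: caught"
```

That is why the log prints `es=1` and `No such process` for the already-killed pid and still proceeds: the failed waitforlisten is exactly what the test demanded.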
00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:04.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1409963) - No such process 00:10:04.882 ERROR: process (pid: 1409963) is no longer running 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:04.882 00:10:04.882 real 0m1.054s 00:10:04.882 user 0m0.999s 00:10:04.882 sys 0m0.498s 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.882 14:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:04.882 ************************************ 00:10:04.882 END TEST default_locks 00:10:04.882 ************************************ 00:10:04.882 14:29:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:04.882 14:29:16 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.883 14:29:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.883 14:29:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:04.883 ************************************ 00:10:04.883 START TEST default_locks_via_rpc 00:10:04.883 ************************************ 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1410098 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1410098 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1410098 ']' 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.883 14:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.883 [2024-11-20 14:29:16.803942] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
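nbd_get_count earlier (nbd_common.sh@61-66) counted remaining devices by piping the jq-extracted names through `grep -c /dev/nbd`, apparently followed by `true` so that zero matches (grep exits 1) do not trip an errexit shell. The same idiom in isolation, as a sketch rather than the shipped helper:

```shell
#!/usr/bin/env bash
set -e                                    # grep -c exits 1 on zero matches,
                                          # which would abort without the guard
count_nbd() {
  echo "$1" | grep -c /dev/nbd || true    # the `|| true` at nbd_common.sh@65
}

two=$(count_nbd $'/dev/nbd0\n/dev/nbd1')
zero=$(count_nbd '')
echo "$two $zero"
```

Under `set -e`, dropping the `|| true` would kill the script the first time the disk list came back empty, which is precisely the success case the trace checks with `'[' 0 -ne 0 ']'`.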
00:10:04.883 [2024-11-20 14:29:16.804015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410098 ] 00:10:05.142 [2024-11-20 14:29:16.880838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.142 [2024-11-20 14:29:16.923277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.400 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.401 14:29:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1410098 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1410098 00:10:05.401 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1410098 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1410098 ']' 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1410098 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1410098 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1410098' 00:10:05.660 killing process with pid 1410098 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1410098 00:10:05.660 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1410098 00:10:05.919 00:10:05.919 real 0m0.969s 00:10:05.919 user 0m0.909s 00:10:05.919 sys 0m0.449s 00:10:05.919 14:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.919 14:29:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 ************************************ 00:10:05.919 END TEST default_locks_via_rpc 00:10:05.919 ************************************ 00:10:05.919 14:29:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:05.919 14:29:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.919 14:29:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.919 14:29:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 ************************************ 00:10:05.919 START TEST non_locking_app_on_locked_coremask 00:10:05.919 ************************************ 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1410354 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1410354 /var/tmp/spdk.sock 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1410354 ']' 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:05.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.919 14:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:05.919 [2024-11-20 14:29:17.846845] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:05.919 [2024-11-20 14:29:17.846888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410354 ] 00:10:06.178 [2024-11-20 14:29:17.922164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.178 [2024-11-20 14:29:17.961738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1410372 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1410372 /var/tmp/spdk2.sock 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1410372 ']' 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.436 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.437 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.437 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.437 14:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:06.437 [2024-11-20 14:29:18.237382] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:06.437 [2024-11-20 14:29:18.237433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410372 ] 00:10:06.437 [2024-11-20 14:29:18.329490] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:06.437 [2024-11-20 14:29:18.329521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.695 [2024-11-20 14:29:18.411484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.263 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.263 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:07.263 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1410354 00:10:07.263 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1410354 00:10:07.263 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:07.521 lslocks: write error 00:10:07.521 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1410354 00:10:07.521 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1410354 ']' 00:10:07.521 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1410354 00:10:07.521 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:07.521 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.521 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1410354 00:10:07.780 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.780 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.780 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1410354' 00:10:07.780 killing process with pid 1410354 00:10:07.780 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1410354 00:10:07.780 14:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1410354 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1410372 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1410372 ']' 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1410372 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1410372 00:10:08.346 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.347 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.347 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1410372' 00:10:08.347 killing process with pid 1410372 00:10:08.347 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1410372 00:10:08.347 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1410372 00:10:08.605 00:10:08.605 real 0m2.671s 00:10:08.605 user 0m2.778s 00:10:08.605 sys 0m0.893s 00:10:08.605 14:29:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.605 14:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.605 ************************************ 00:10:08.605 END TEST non_locking_app_on_locked_coremask 00:10:08.605 ************************************ 00:10:08.605 14:29:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:08.605 14:29:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.605 14:29:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.605 14:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:08.605 ************************************ 00:10:08.605 START TEST locking_app_on_unlocked_coremask 00:10:08.605 ************************************ 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1410850 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1410850 /var/tmp/spdk.sock 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1410850 ']' 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.605 14:29:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.605 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.864 [2024-11-20 14:29:20.583661] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:08.864 [2024-11-20 14:29:20.583702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410850 ] 00:10:08.864 [2024-11-20 14:29:20.658519] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:08.864 [2024-11-20 14:29:20.658542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.864 [2024-11-20 14:29:20.702171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.122 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.122 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:09.122 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1410866 00:10:09.122 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1410866 /var/tmp/spdk2.sock 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1410866 ']' 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:09.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.123 14:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:09.123 [2024-11-20 14:29:20.970426] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:09.123 [2024-11-20 14:29:20.970478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410866 ] 00:10:09.123 [2024-11-20 14:29:21.062421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.382 [2024-11-20 14:29:21.152005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.950 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.950 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:09.950 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1410866 00:10:09.950 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1410866 00:10:09.950 14:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:10.516 lslocks: write error 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1410850 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1410850 ']' 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1410850 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1410850 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1410850' 00:10:10.516 killing process with pid 1410850 00:10:10.516 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1410850 00:10:10.517 14:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1410850 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1410866 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1410866 ']' 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1410866 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1410866 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1410866' 00:10:11.453 killing process with pid 1410866 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1410866 00:10:11.453 14:29:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1410866 00:10:11.453 00:10:11.453 real 0m2.872s 00:10:11.453 user 0m3.037s 00:10:11.453 sys 0m0.938s 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.453 14:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.453 ************************************ 00:10:11.453 END TEST locking_app_on_unlocked_coremask 00:10:11.453 ************************************ 00:10:11.711 14:29:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:11.711 14:29:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.711 14:29:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.711 14:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.711 ************************************ 00:10:11.711 START TEST locking_app_on_locked_coremask 00:10:11.711 ************************************ 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1411350 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1411350 /var/tmp/spdk.sock 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1411350 ']' 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.712 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.712 [2024-11-20 14:29:23.526454] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:11.712 [2024-11-20 14:29:23.526497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411350 ] 00:10:11.712 [2024-11-20 14:29:23.597196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.712 [2024-11-20 14:29:23.635504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1411472 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1411472 /var/tmp/spdk2.sock 
00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1411472 /var/tmp/spdk2.sock 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1411472 /var/tmp/spdk2.sock 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1411472 ']' 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.973 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.974 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:11.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:11.974 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.974 14:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.974 [2024-11-20 14:29:23.919894] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:11.974 [2024-11-20 14:29:23.919944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411472 ] 00:10:12.317 [2024-11-20 14:29:24.011450] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1411350 has claimed it. 00:10:12.317 [2024-11-20 14:29:24.011489] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:12.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1411472) - No such process 00:10:12.884 ERROR: process (pid: 1411472) is no longer running 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1411350 00:10:12.884 14:29:24 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1411350 00:10:12.884 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.143 lslocks: write error 00:10:13.143 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1411350 00:10:13.143 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1411350 ']' 00:10:13.143 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1411350 00:10:13.143 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:13.143 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.143 14:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1411350 00:10:13.143 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.143 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.143 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1411350' 00:10:13.143 killing process with pid 1411350 00:10:13.143 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1411350 00:10:13.143 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1411350 00:10:13.402 00:10:13.402 real 0m1.868s 00:10:13.402 user 0m2.004s 00:10:13.402 sys 0m0.652s 00:10:13.402 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.402 14:29:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.402 ************************************ 00:10:13.402 END TEST locking_app_on_locked_coremask 00:10:13.402 ************************************ 00:10:13.660 14:29:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:13.660 14:29:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.660 14:29:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.660 14:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:13.660 ************************************ 00:10:13.660 START TEST locking_overlapped_coremask 00:10:13.660 ************************************ 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1411833 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1411833 /var/tmp/spdk.sock 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1411833 ']' 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.660 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.660 [2024-11-20 14:29:25.461161] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:13.660 [2024-11-20 14:29:25.461204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411833 ] 00:10:13.660 [2024-11-20 14:29:25.537119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.660 [2024-11-20 14:29:25.582211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.660 [2024-11-20 14:29:25.582317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.660 [2024-11-20 14:29:25.582318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1411844 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1411844 /var/tmp/spdk2.sock 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1411844 /var/tmp/spdk2.sock 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1411844 /var/tmp/spdk2.sock 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1411844 ']' 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.919 14:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.919 [2024-11-20 14:29:25.850397] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:13.919 [2024-11-20 14:29:25.850447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411844 ] 00:10:14.178 [2024-11-20 14:29:25.942912] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1411833 has claimed it. 00:10:14.178 [2024-11-20 14:29:25.942949] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:14.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1411844) - No such process 00:10:14.746 ERROR: process (pid: 1411844) is no longer running 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1411833 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1411833 ']' 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1411833 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1411833 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1411833' 00:10:14.746 killing process with pid 1411833 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1411833 00:10:14.746 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1411833 00:10:15.005 00:10:15.005 real 0m1.440s 00:10:15.005 user 0m3.952s 00:10:15.005 sys 0m0.405s 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:15.005 
************************************ 00:10:15.005 END TEST locking_overlapped_coremask 00:10:15.005 ************************************ 00:10:15.005 14:29:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:15.005 14:29:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.005 14:29:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.005 14:29:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:15.005 ************************************ 00:10:15.005 START TEST locking_overlapped_coremask_via_rpc 00:10:15.005 ************************************ 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1412107 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1412107 /var/tmp/spdk.sock 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1412107 ']' 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:15.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.005 14:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.264 [2024-11-20 14:29:26.972498] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:15.264 [2024-11-20 14:29:26.972543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412107 ] 00:10:15.264 [2024-11-20 14:29:27.045051] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:15.264 [2024-11-20 14:29:27.045075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.264 [2024-11-20 14:29:27.086563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.264 [2024-11-20 14:29:27.086671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.264 [2024-11-20 14:29:27.086679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1412115 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1412115 /var/tmp/spdk2.sock 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1412115 ']' 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.524 14:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.524 [2024-11-20 14:29:27.359506] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:15.524 [2024-11-20 14:29:27.359553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412115 ] 00:10:15.524 [2024-11-20 14:29:27.452074] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:15.524 [2024-11-20 14:29:27.452104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.782 [2024-11-20 14:29:27.540088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.782 [2024-11-20 14:29:27.540207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.782 [2024-11-20 14:29:27.540208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.349 14:29:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.349 [2024-11-20 14:29:28.222022] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1412107 has claimed it. 00:10:16.349 request: 00:10:16.349 { 00:10:16.349 "method": "framework_enable_cpumask_locks", 00:10:16.349 "req_id": 1 00:10:16.349 } 00:10:16.349 Got JSON-RPC error response 00:10:16.349 response: 00:10:16.349 { 00:10:16.349 "code": -32603, 00:10:16.349 "message": "Failed to claim CPU core: 2" 00:10:16.349 } 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:16.349 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1412107 /var/tmp/spdk.sock 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1412107 ']' 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.350 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1412115 /var/tmp/spdk2.sock 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1412115 ']' 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.608 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:16.867 00:10:16.867 real 0m1.716s 00:10:16.867 user 0m0.822s 00:10:16.867 sys 0m0.143s 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.867 14:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.867 ************************************ 00:10:16.867 END TEST locking_overlapped_coremask_via_rpc 00:10:16.867 ************************************ 00:10:16.867 14:29:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:16.867 14:29:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1412107 ]] 00:10:16.867 14:29:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1412107 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1412107 ']' 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1412107 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1412107 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1412107' 00:10:16.867 killing process with pid 1412107 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1412107 00:10:16.867 14:29:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1412107 00:10:17.126 14:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1412115 ]] 00:10:17.126 14:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1412115 00:10:17.126 14:29:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1412115 ']' 00:10:17.126 14:29:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1412115 00:10:17.126 14:29:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:17.127 14:29:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.127 14:29:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1412115 00:10:17.386 14:29:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:17.386 14:29:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:17.386 14:29:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1412115' 00:10:17.386 killing process with pid 1412115 00:10:17.386 14:29:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1412115 00:10:17.386 14:29:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1412115 00:10:17.645 14:29:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:17.645 14:29:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:17.645 14:29:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1412107 ]] 00:10:17.645 14:29:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1412107 00:10:17.645 14:29:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1412107 ']' 00:10:17.645 14:29:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1412107 00:10:17.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1412107) - No such process 00:10:17.645 14:29:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1412107 is not found' 00:10:17.646 Process with pid 1412107 is not found 00:10:17.646 14:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1412115 ]] 00:10:17.646 14:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1412115 00:10:17.646 14:29:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1412115 ']' 00:10:17.646 14:29:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1412115 00:10:17.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1412115) - No such process 00:10:17.646 14:29:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1412115 is not found' 00:10:17.646 Process with pid 1412115 is not found 00:10:17.646 14:29:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:17.646 00:10:17.646 real 0m13.980s 00:10:17.646 user 0m24.313s 00:10:17.646 sys 0m4.915s 00:10:17.646 14:29:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.646 
14:29:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:17.646 ************************************ 00:10:17.646 END TEST cpu_locks 00:10:17.646 ************************************ 00:10:17.646 00:10:17.646 real 0m39.053s 00:10:17.646 user 1m14.930s 00:10:17.646 sys 0m8.466s 00:10:17.646 14:29:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.646 14:29:29 event -- common/autotest_common.sh@10 -- # set +x 00:10:17.646 ************************************ 00:10:17.646 END TEST event 00:10:17.646 ************************************ 00:10:17.646 14:29:29 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:17.646 14:29:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.646 14:29:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.646 14:29:29 -- common/autotest_common.sh@10 -- # set +x 00:10:17.646 ************************************ 00:10:17.646 START TEST thread 00:10:17.646 ************************************ 00:10:17.646 14:29:29 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:17.646 * Looking for test storage... 
00:10:17.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:17.646 14:29:29 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.646 14:29:29 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.646 14:29:29 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.905 14:29:29 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.905 14:29:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.905 14:29:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.905 14:29:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.905 14:29:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.905 14:29:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.905 14:29:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.905 14:29:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.905 14:29:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.905 14:29:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.905 14:29:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.905 14:29:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.905 14:29:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:17.905 14:29:29 thread -- scripts/common.sh@345 -- # : 1 00:10:17.905 14:29:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.905 14:29:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.905 14:29:29 thread -- scripts/common.sh@365 -- # decimal 1 00:10:17.905 14:29:29 thread -- scripts/common.sh@353 -- # local d=1 00:10:17.905 14:29:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.905 14:29:29 thread -- scripts/common.sh@355 -- # echo 1 00:10:17.905 14:29:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.905 14:29:29 thread -- scripts/common.sh@366 -- # decimal 2 00:10:17.905 14:29:29 thread -- scripts/common.sh@353 -- # local d=2 00:10:17.905 14:29:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.905 14:29:29 thread -- scripts/common.sh@355 -- # echo 2 00:10:17.905 14:29:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.905 14:29:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.905 14:29:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.905 14:29:29 thread -- scripts/common.sh@368 -- # return 0 00:10:17.905 14:29:29 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.905 14:29:29 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.905 --rc genhtml_branch_coverage=1 00:10:17.905 --rc genhtml_function_coverage=1 00:10:17.905 --rc genhtml_legend=1 00:10:17.905 --rc geninfo_all_blocks=1 00:10:17.905 --rc geninfo_unexecuted_blocks=1 00:10:17.905 00:10:17.905 ' 00:10:17.905 14:29:29 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.905 --rc genhtml_branch_coverage=1 00:10:17.905 --rc genhtml_function_coverage=1 00:10:17.905 --rc genhtml_legend=1 00:10:17.905 --rc geninfo_all_blocks=1 00:10:17.905 --rc geninfo_unexecuted_blocks=1 00:10:17.905 00:10:17.905 ' 00:10:17.905 14:29:29 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.905 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.905 --rc genhtml_branch_coverage=1 00:10:17.905 --rc genhtml_function_coverage=1 00:10:17.905 --rc genhtml_legend=1 00:10:17.905 --rc geninfo_all_blocks=1 00:10:17.905 --rc geninfo_unexecuted_blocks=1 00:10:17.905 00:10:17.905 ' 00:10:17.905 14:29:29 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.906 --rc genhtml_branch_coverage=1 00:10:17.906 --rc genhtml_function_coverage=1 00:10:17.906 --rc genhtml_legend=1 00:10:17.906 --rc geninfo_all_blocks=1 00:10:17.906 --rc geninfo_unexecuted_blocks=1 00:10:17.906 00:10:17.906 ' 00:10:17.906 14:29:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:17.906 14:29:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:17.906 14:29:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.906 14:29:29 thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.906 ************************************ 00:10:17.906 START TEST thread_poller_perf 00:10:17.906 ************************************ 00:10:17.906 14:29:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:17.906 [2024-11-20 14:29:29.723603] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:17.906 [2024-11-20 14:29:29.723677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412676 ] 00:10:17.906 [2024-11-20 14:29:29.801802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.906 [2024-11-20 14:29:29.842280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.906 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:19.282 [2024-11-20T13:29:31.240Z] ====================================== 00:10:19.282 [2024-11-20T13:29:31.240Z] busy:2307548056 (cyc) 00:10:19.282 [2024-11-20T13:29:31.240Z] total_run_count: 399000 00:10:19.282 [2024-11-20T13:29:31.240Z] tsc_hz: 2300000000 (cyc) 00:10:19.282 [2024-11-20T13:29:31.240Z] ====================================== 00:10:19.282 [2024-11-20T13:29:31.240Z] poller_cost: 5783 (cyc), 2514 (nsec) 00:10:19.282 00:10:19.282 real 0m1.188s 00:10:19.282 user 0m1.104s 00:10:19.282 sys 0m0.080s 00:10:19.282 14:29:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.282 14:29:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:19.282 ************************************ 00:10:19.282 END TEST thread_poller_perf 00:10:19.282 ************************************ 00:10:19.282 14:29:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:19.282 14:29:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:19.282 14:29:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.282 14:29:30 thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.282 ************************************ 00:10:19.282 START TEST thread_poller_perf 00:10:19.282 
************************************ 00:10:19.282 14:29:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:19.282 [2024-11-20 14:29:30.985150] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:19.282 [2024-11-20 14:29:30.985220] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412926 ] 00:10:19.282 [2024-11-20 14:29:31.059606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.282 [2024-11-20 14:29:31.099165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.282 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:20.218 [2024-11-20T13:29:32.176Z] ====================================== 00:10:20.218 [2024-11-20T13:29:32.176Z] busy:2301377210 (cyc) 00:10:20.218 [2024-11-20T13:29:32.176Z] total_run_count: 5354000 00:10:20.218 [2024-11-20T13:29:32.176Z] tsc_hz: 2300000000 (cyc) 00:10:20.218 [2024-11-20T13:29:32.176Z] ====================================== 00:10:20.218 [2024-11-20T13:29:32.176Z] poller_cost: 429 (cyc), 186 (nsec) 00:10:20.218 00:10:20.218 real 0m1.178s 00:10:20.218 user 0m1.095s 00:10:20.218 sys 0m0.079s 00:10:20.218 14:29:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.218 14:29:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:20.218 ************************************ 00:10:20.218 END TEST thread_poller_perf 00:10:20.218 ************************************ 00:10:20.477 14:29:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:20.477 00:10:20.477 real 0m2.678s 00:10:20.477 user 0m2.344s 00:10:20.477 sys 0m0.348s 00:10:20.477 14:29:32 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.477 14:29:32 thread -- common/autotest_common.sh@10 -- # set +x 00:10:20.477 ************************************ 00:10:20.477 END TEST thread 00:10:20.477 ************************************ 00:10:20.477 14:29:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:20.477 14:29:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:20.477 14:29:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.477 14:29:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.477 14:29:32 -- common/autotest_common.sh@10 -- # set +x 00:10:20.477 ************************************ 00:10:20.477 START TEST app_cmdline 00:10:20.477 ************************************ 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:20.477 * Looking for test storage... 00:10:20.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.477 14:29:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.477 --rc genhtml_branch_coverage=1 
00:10:20.477 --rc genhtml_function_coverage=1 00:10:20.477 --rc genhtml_legend=1 00:10:20.477 --rc geninfo_all_blocks=1 00:10:20.477 --rc geninfo_unexecuted_blocks=1 00:10:20.477 00:10:20.477 ' 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.477 --rc genhtml_branch_coverage=1 00:10:20.477 --rc genhtml_function_coverage=1 00:10:20.477 --rc genhtml_legend=1 00:10:20.477 --rc geninfo_all_blocks=1 00:10:20.477 --rc geninfo_unexecuted_blocks=1 00:10:20.477 00:10:20.477 ' 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.477 --rc genhtml_branch_coverage=1 00:10:20.477 --rc genhtml_function_coverage=1 00:10:20.477 --rc genhtml_legend=1 00:10:20.477 --rc geninfo_all_blocks=1 00:10:20.477 --rc geninfo_unexecuted_blocks=1 00:10:20.477 00:10:20.477 ' 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.477 --rc genhtml_branch_coverage=1 00:10:20.477 --rc genhtml_function_coverage=1 00:10:20.477 --rc genhtml_legend=1 00:10:20.477 --rc geninfo_all_blocks=1 00:10:20.477 --rc geninfo_unexecuted_blocks=1 00:10:20.477 00:10:20.477 ' 00:10:20.477 14:29:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:20.477 14:29:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1413226 00:10:20.477 14:29:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1413226 00:10:20.477 14:29:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1413226 ']' 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.477 14:29:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 [2024-11-20 14:29:32.475064] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:20.736 [2024-11-20 14:29:32.475116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413226 ] 00:10:20.736 [2024-11-20 14:29:32.548721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.736 [2024-11-20 14:29:32.589216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.995 14:29:32 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.995 14:29:32 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:20.995 14:29:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:21.253 { 00:10:21.253 "version": "SPDK v25.01-pre git sha1 d2ebd983e", 00:10:21.253 "fields": { 00:10:21.253 "major": 25, 00:10:21.253 "minor": 1, 00:10:21.253 "patch": 0, 00:10:21.253 "suffix": "-pre", 00:10:21.253 "commit": "d2ebd983e" 00:10:21.253 } 00:10:21.253 } 00:10:21.253 14:29:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:21.253 14:29:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:21.253 14:29:32 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:10:21.253 14:29:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:21.253 14:29:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:21.253 14:29:32 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.253 14:29:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:21.253 14:29:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:21.253 14:29:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.253 14:29:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:21.253 14:29:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:21.253 14:29:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:21.253 14:29:33 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.512 request: 00:10:21.512 { 00:10:21.512 "method": "env_dpdk_get_mem_stats", 00:10:21.512 "req_id": 1 00:10:21.512 } 00:10:21.512 Got JSON-RPC error response 00:10:21.512 response: 00:10:21.512 { 00:10:21.512 "code": -32601, 00:10:21.512 "message": "Method not found" 00:10:21.512 } 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.512 14:29:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1413226 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1413226 ']' 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1413226 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1413226 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1413226' 00:10:21.512 killing process with pid 1413226 00:10:21.512 
14:29:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 1413226 00:10:21.512 14:29:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 1413226 00:10:21.771 00:10:21.771 real 0m1.343s 00:10:21.771 user 0m1.562s 00:10:21.771 sys 0m0.443s 00:10:21.771 14:29:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.771 14:29:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:21.771 ************************************ 00:10:21.771 END TEST app_cmdline 00:10:21.771 ************************************ 00:10:21.771 14:29:33 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:21.771 14:29:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:21.771 14:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.771 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:10:21.771 ************************************ 00:10:21.771 START TEST version 00:10:21.771 ************************************ 00:10:21.771 14:29:33 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:22.032 * Looking for test storage... 
00:10:22.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.032 14:29:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.032 14:29:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.032 14:29:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.032 14:29:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.032 14:29:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.032 14:29:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.032 14:29:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.032 14:29:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.032 14:29:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.032 14:29:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.032 14:29:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.032 14:29:33 version -- scripts/common.sh@344 -- # case "$op" in 00:10:22.032 14:29:33 version -- scripts/common.sh@345 -- # : 1 00:10:22.032 14:29:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.032 14:29:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.032 14:29:33 version -- scripts/common.sh@365 -- # decimal 1 00:10:22.032 14:29:33 version -- scripts/common.sh@353 -- # local d=1 00:10:22.032 14:29:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.032 14:29:33 version -- scripts/common.sh@355 -- # echo 1 00:10:22.032 14:29:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.032 14:29:33 version -- scripts/common.sh@366 -- # decimal 2 00:10:22.032 14:29:33 version -- scripts/common.sh@353 -- # local d=2 00:10:22.032 14:29:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.032 14:29:33 version -- scripts/common.sh@355 -- # echo 2 00:10:22.032 14:29:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.032 14:29:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.032 14:29:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.032 14:29:33 version -- scripts/common.sh@368 -- # return 0 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.032 --rc genhtml_branch_coverage=1 00:10:22.032 --rc genhtml_function_coverage=1 00:10:22.032 --rc genhtml_legend=1 00:10:22.032 --rc geninfo_all_blocks=1 00:10:22.032 --rc geninfo_unexecuted_blocks=1 00:10:22.032 00:10:22.032 ' 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.032 --rc genhtml_branch_coverage=1 00:10:22.032 --rc genhtml_function_coverage=1 00:10:22.032 --rc genhtml_legend=1 00:10:22.032 --rc geninfo_all_blocks=1 00:10:22.032 --rc geninfo_unexecuted_blocks=1 00:10:22.032 00:10:22.032 ' 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.032 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.032 --rc genhtml_branch_coverage=1 00:10:22.032 --rc genhtml_function_coverage=1 00:10:22.032 --rc genhtml_legend=1 00:10:22.032 --rc geninfo_all_blocks=1 00:10:22.032 --rc geninfo_unexecuted_blocks=1 00:10:22.032 00:10:22.032 ' 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.032 --rc genhtml_branch_coverage=1 00:10:22.032 --rc genhtml_function_coverage=1 00:10:22.032 --rc genhtml_legend=1 00:10:22.032 --rc geninfo_all_blocks=1 00:10:22.032 --rc geninfo_unexecuted_blocks=1 00:10:22.032 00:10:22.032 ' 00:10:22.032 14:29:33 version -- app/version.sh@17 -- # get_header_version major 00:10:22.032 14:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # cut -f2 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.032 14:29:33 version -- app/version.sh@17 -- # major=25 00:10:22.032 14:29:33 version -- app/version.sh@18 -- # get_header_version minor 00:10:22.032 14:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # cut -f2 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.032 14:29:33 version -- app/version.sh@18 -- # minor=1 00:10:22.032 14:29:33 version -- app/version.sh@19 -- # get_header_version patch 00:10:22.032 14:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # cut -f2 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.032 
14:29:33 version -- app/version.sh@19 -- # patch=0 00:10:22.032 14:29:33 version -- app/version.sh@20 -- # get_header_version suffix 00:10:22.032 14:29:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # cut -f2 00:10:22.032 14:29:33 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.032 14:29:33 version -- app/version.sh@20 -- # suffix=-pre 00:10:22.032 14:29:33 version -- app/version.sh@22 -- # version=25.1 00:10:22.032 14:29:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:22.032 14:29:33 version -- app/version.sh@28 -- # version=25.1rc0 00:10:22.032 14:29:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:22.032 14:29:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:22.032 14:29:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:22.032 14:29:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:22.032 00:10:22.032 real 0m0.247s 00:10:22.032 user 0m0.148s 00:10:22.032 sys 0m0.143s 00:10:22.032 14:29:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.032 14:29:33 version -- common/autotest_common.sh@10 -- # set +x 00:10:22.032 ************************************ 00:10:22.032 END TEST version 00:10:22.032 ************************************ 00:10:22.032 14:29:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:22.032 14:29:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:22.032 14:29:33 -- spdk/autotest.sh@194 -- # uname -s 00:10:22.032 14:29:33 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:10:22.032 14:29:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:22.032 14:29:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:22.032 14:29:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:22.032 14:29:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:22.032 14:29:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:22.032 14:29:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.032 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:10:22.032 14:29:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:22.032 14:29:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:22.032 14:29:33 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:22.033 14:29:33 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:10:22.033 14:29:33 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:22.033 14:29:33 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:22.033 14:29:33 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:22.033 14:29:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.033 14:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.033 14:29:33 -- common/autotest_common.sh@10 -- # set +x 00:10:22.292 ************************************ 00:10:22.292 START TEST nvmf_tcp 00:10:22.293 ************************************ 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:22.293 * Looking for test storage... 
00:10:22.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.293 14:29:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.293 --rc genhtml_branch_coverage=1 00:10:22.293 --rc genhtml_function_coverage=1 00:10:22.293 --rc genhtml_legend=1 00:10:22.293 --rc geninfo_all_blocks=1 00:10:22.293 --rc geninfo_unexecuted_blocks=1 00:10:22.293 00:10:22.293 ' 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.293 --rc genhtml_branch_coverage=1 00:10:22.293 --rc genhtml_function_coverage=1 00:10:22.293 --rc genhtml_legend=1 00:10:22.293 --rc geninfo_all_blocks=1 00:10:22.293 --rc geninfo_unexecuted_blocks=1 00:10:22.293 00:10:22.293 ' 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:10:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.293 --rc genhtml_branch_coverage=1 00:10:22.293 --rc genhtml_function_coverage=1 00:10:22.293 --rc genhtml_legend=1 00:10:22.293 --rc geninfo_all_blocks=1 00:10:22.293 --rc geninfo_unexecuted_blocks=1 00:10:22.293 00:10:22.293 ' 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.293 --rc genhtml_branch_coverage=1 00:10:22.293 --rc genhtml_function_coverage=1 00:10:22.293 --rc genhtml_legend=1 00:10:22.293 --rc geninfo_all_blocks=1 00:10:22.293 --rc geninfo_unexecuted_blocks=1 00:10:22.293 00:10:22.293 ' 00:10:22.293 14:29:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:22.293 14:29:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:22.293 14:29:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.293 14:29:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.293 ************************************ 00:10:22.293 START TEST nvmf_target_core 00:10:22.293 ************************************ 00:10:22.293 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:22.553 * Looking for test storage... 
00:10:22.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.553 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.554 --rc genhtml_branch_coverage=1 00:10:22.554 --rc genhtml_function_coverage=1 00:10:22.554 --rc genhtml_legend=1 00:10:22.554 --rc geninfo_all_blocks=1 00:10:22.554 --rc geninfo_unexecuted_blocks=1 00:10:22.554 00:10:22.554 ' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.554 --rc genhtml_branch_coverage=1 
00:10:22.554 --rc genhtml_function_coverage=1 00:10:22.554 --rc genhtml_legend=1 00:10:22.554 --rc geninfo_all_blocks=1 00:10:22.554 --rc geninfo_unexecuted_blocks=1 00:10:22.554 00:10:22.554 ' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.554 --rc genhtml_branch_coverage=1 00:10:22.554 --rc genhtml_function_coverage=1 00:10:22.554 --rc genhtml_legend=1 00:10:22.554 --rc geninfo_all_blocks=1 00:10:22.554 --rc geninfo_unexecuted_blocks=1 00:10:22.554 00:10:22.554 ' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.554 --rc genhtml_branch_coverage=1 00:10:22.554 --rc genhtml_function_coverage=1 00:10:22.554 --rc genhtml_legend=1 00:10:22.554 --rc geninfo_all_blocks=1 00:10:22.554 --rc geninfo_unexecuted_blocks=1 00:10:22.554 00:10:22.554 ' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.554 ************************************ 00:10:22.554 START TEST nvmf_abort 00:10:22.554 ************************************ 00:10:22.554 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:22.814 * Looking for test storage... 
00:10:22.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.815 
14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.815 --rc genhtml_branch_coverage=1 00:10:22.815 --rc genhtml_function_coverage=1 00:10:22.815 --rc genhtml_legend=1 00:10:22.815 --rc geninfo_all_blocks=1 00:10:22.815 --rc 
geninfo_unexecuted_blocks=1 00:10:22.815 00:10:22.815 ' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.815 --rc genhtml_branch_coverage=1 00:10:22.815 --rc genhtml_function_coverage=1 00:10:22.815 --rc genhtml_legend=1 00:10:22.815 --rc geninfo_all_blocks=1 00:10:22.815 --rc geninfo_unexecuted_blocks=1 00:10:22.815 00:10:22.815 ' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.815 --rc genhtml_branch_coverage=1 00:10:22.815 --rc genhtml_function_coverage=1 00:10:22.815 --rc genhtml_legend=1 00:10:22.815 --rc geninfo_all_blocks=1 00:10:22.815 --rc geninfo_unexecuted_blocks=1 00:10:22.815 00:10:22.815 ' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.815 --rc genhtml_branch_coverage=1 00:10:22.815 --rc genhtml_function_coverage=1 00:10:22.815 --rc genhtml_legend=1 00:10:22.815 --rc geninfo_all_blocks=1 00:10:22.815 --rc geninfo_unexecuted_blocks=1 00:10:22.815 00:10:22.815 ' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.815 14:29:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.815 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.816 14:29:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.386 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.386 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.386 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:29.386 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.387 14:29:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:29.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:29.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.387 14:29:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:29.387 Found net devices under 0000:86:00.0: cvl_0_0 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:10:29.387 Found net devices under 0000:86:00.1: cvl_0_1 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:10:29.387 00:10:29.387 --- 10.0.0.2 ping statistics --- 00:10:29.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.387 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:29.387 00:10:29.387 --- 10.0.0.1 ping statistics --- 00:10:29.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.387 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:29.387 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1416875 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1416875 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1416875 ']' 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 [2024-11-20 14:29:40.767352] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:10:29.388 [2024-11-20 14:29:40.767403] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.388 [2024-11-20 14:29:40.849106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.388 [2024-11-20 14:29:40.892530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.388 [2024-11-20 14:29:40.892567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.388 [2024-11-20 14:29:40.892574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.388 [2024-11-20 14:29:40.892580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.388 [2024-11-20 14:29:40.892585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:29.388 [2024-11-20 14:29:40.894065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.388 [2024-11-20 14:29:40.894149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.388 [2024-11-20 14:29:40.894150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:29.388 14:29:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 [2024-11-20 14:29:41.040436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 Malloc0 00:10:29.388 14:29:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 Delay0 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 [2024-11-20 14:29:41.119686] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.388 14:29:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:29.388 [2024-11-20 14:29:41.256231] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:31.924 Initializing NVMe Controllers 00:10:31.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:31.924 controller IO queue size 128 less than required 00:10:31.924 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:31.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:31.924 Initialization complete. Launching workers. 
00:10:31.924 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36572 00:10:31.924 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36633, failed to submit 62 00:10:31.924 success 36576, unsuccessful 57, failed 0 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.924 rmmod nvme_tcp 00:10:31.924 rmmod nvme_fabrics 00:10:31.924 rmmod nvme_keyring 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:10:31.924 14:29:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1416875 ']' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1416875 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1416875 ']' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1416875 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1416875 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1416875' 00:10:31.924 killing process with pid 1416875 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1416875 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1416875 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.924 14:29:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.458 00:10:34.458 real 0m11.376s 00:10:34.458 user 0m12.120s 00:10:34.458 sys 0m5.534s 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:34.458 ************************************ 00:10:34.458 END TEST nvmf_abort 00:10:34.458 ************************************ 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.458 14:29:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.458 ************************************ 00:10:34.458 START TEST nvmf_ns_hotplug_stress 00:10:34.458 ************************************ 00:10:34.458 14:29:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:34.458 * Looking for test storage... 00:10:34.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.458 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.459 
14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.459 14:29:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.459 --rc genhtml_branch_coverage=1 00:10:34.459 --rc genhtml_function_coverage=1 00:10:34.459 --rc genhtml_legend=1 00:10:34.459 --rc geninfo_all_blocks=1 00:10:34.459 --rc geninfo_unexecuted_blocks=1 00:10:34.459 00:10:34.459 ' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.459 --rc genhtml_branch_coverage=1 00:10:34.459 --rc genhtml_function_coverage=1 00:10:34.459 --rc genhtml_legend=1 00:10:34.459 --rc geninfo_all_blocks=1 00:10:34.459 --rc geninfo_unexecuted_blocks=1 00:10:34.459 00:10:34.459 ' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.459 --rc genhtml_branch_coverage=1 00:10:34.459 --rc genhtml_function_coverage=1 00:10:34.459 --rc genhtml_legend=1 00:10:34.459 --rc geninfo_all_blocks=1 00:10:34.459 --rc geninfo_unexecuted_blocks=1 00:10:34.459 00:10:34.459 ' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.459 --rc genhtml_branch_coverage=1 00:10:34.459 --rc genhtml_function_coverage=1 00:10:34.459 --rc genhtml_legend=1 00:10:34.459 --rc geninfo_all_blocks=1 00:10:34.459 --rc geninfo_unexecuted_blocks=1 00:10:34.459 
00:10:34.459 ' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.459 14:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.032 14:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:41.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:41.032 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.032 14:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:41.032 Found net devices under 0000:86:00.0: cvl_0_0 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.032 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.032 14:29:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:41.033 Found net devices under 0000:86:00.1: cvl_0_1 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.033 14:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.033 14:29:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:10:41.033 00:10:41.033 --- 10.0.0.2 ping statistics --- 00:10:41.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.033 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:41.033 00:10:41.033 --- 10.0.0.1 ping statistics --- 00:10:41.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.033 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1420929 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1420929 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1420929 ']' 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.033 14:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.033 [2024-11-20 14:29:52.189607] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:10:41.033 [2024-11-20 14:29:52.189650] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.033 [2024-11-20 14:29:52.267759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:41.033 [2024-11-20 14:29:52.309641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.033 [2024-11-20 14:29:52.309675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.033 [2024-11-20 14:29:52.309682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.033 [2024-11-20 14:29:52.309689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.033 [2024-11-20 14:29:52.309694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:41.033 [2024-11-20 14:29:52.312967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.033 [2024-11-20 14:29:52.313070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.033 [2024-11-20 14:29:52.313071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:41.292 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.292 [2024-11-20 14:29:53.241295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.551 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:41.551 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.809 [2024-11-20 14:29:53.650792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.809 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:42.068 14:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:42.326 Malloc0 00:10:42.326 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:42.326 Delay0 00:10:42.587 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.587 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:42.849 NULL1 00:10:42.849 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:43.108 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1421426 00:10:43.108 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:43.108 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:43.108 14:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.367 Read completed with error (sct=0, sc=11) 00:10:43.367 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.367 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:43.367 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:43.626 true 00:10:43.626 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:43.626 14:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.563 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.822 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:44.822 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:44.822 true 00:10:44.822 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:44.822 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.080 14:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.338 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:45.338 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:45.338 true 00:10:45.597 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:45.597 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.597 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.856 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:45.856 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:46.115 true 00:10:46.115 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:46.115 14:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.050 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.050 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:47.050 14:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:47.309 true 00:10:47.309 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:47.309 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.568 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.826 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:47.826 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:47.826 true 00:10:47.826 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:47.826 14:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.201 14:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.201 14:30:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:49.201 14:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:49.459 true 00:10:49.459 14:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:49.459 14:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.396 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.396 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:50.396 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:50.655 true 00:10:50.655 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:50.655 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.914 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.173 14:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:51.173 14:30:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:51.173 true 00:10:51.173 14:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:51.173 14:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 14:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.549 14:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:52.549 14:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:52.808 true 00:10:52.808 14:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:52.808 14:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.743 14:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.743 14:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:53.743 14:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:54.002 true 00:10:54.002 14:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:54.002 14:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.260 14:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.518 14:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:54.518 14:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:54.518 true 00:10:54.518 14:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:54.519 14:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.896 14:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.896 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.896 14:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:55.896 14:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:56.155 true 00:10:56.155 14:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:56.155 14:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.091 14:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.091 14:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:57.091 14:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:57.350 true 00:10:57.350 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:57.350 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.608 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.867 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:57.867 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:57.867 true 00:10:58.125 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:58.125 14:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.062 14:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.062 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:59.062 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:59.321 true 00:10:59.321 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:10:59.321 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.583 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.842 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:59.842 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:59.842 true 00:11:00.101 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:00.101 14:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.043 14:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.305 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:11:01.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.305 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:01.305 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:01.579 true 00:11:01.579 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:01.579 14:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.233 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.492 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:02.492 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:02.751 true 00:11:02.751 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:02.751 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.010 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:11:03.010 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:03.010 14:30:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:03.269 true 00:11:03.269 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:03.269 14:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.646 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.646 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:04.646 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:04.905 true 00:11:04.905 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:04.905 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.905 14:30:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.163 14:30:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:05.163 14:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:05.422 true 00:11:05.422 14:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:05.422 14:30:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.359 14:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.619 14:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:06.619 14:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:06.878 true 00:11:06.878 14:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:06.878 14:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.136 14:30:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.136 14:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:07.136 14:30:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:07.393 true 00:11:07.393 14:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:07.393 14:30:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.586 14:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.586 14:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:08.586 14:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:08.845 true 00:11:08.845 14:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:08.845 14:30:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:11:09.781 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.781 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:09.781 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:10.040 true 00:11:10.040 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:10.040 14:30:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.299 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.557 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:10.557 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:10.557 true 00:11:10.815 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:10.815 14:30:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.752 14:30:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.011 14:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:12.011 14:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:12.270 true 00:11:12.270 14:30:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:12.270 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.270 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.529 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:12.529 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:12.788 true 00:11:12.788 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:12.788 14:30:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:14.165 Initializing NVMe Controllers
00:11:14.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:14.165 Controller IO queue size 128, less than required.
00:11:14.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:14.165 Controller IO queue size 128, less than required.
00:11:14.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:14.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:14.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:14.165 Initialization complete. Launching workers.
00:11:14.165 ========================================================
00:11:14.165                                                                            Latency(us)
00:11:14.165 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:11:14.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1780.93       0.87   46739.83    1899.38 1098515.90
00:11:14.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15994.81       7.81    8002.46    1614.77  457295.42
00:11:14.165 ========================================================
00:11:14.165 Total                                                                :   17775.75       8.68   11883.51    1614.77 1098515.90
00:11:14.165
00:11:14.165 14:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:14.165 14:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:11:14.165 14:30:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
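[editor's sketch] The iterations traced above all follow ns_hotplug_stress.sh lines 44-50: check that the I/O workload is still alive, hot-remove and re-add namespace 1, then grow and resize the NULL1 bdev. A rough dry-run reconstruction from the command traces (not the actual test script; `rpc` here is a hypothetical stand-in that echoes instead of calling scripts/rpc.py, and the loop is capped at three iterations instead of running until the workload exits):

```shell
#!/bin/sh
# Dry-run reconstruction of the resize/add/remove loop traced in this log.
# rpc() is a stand-in that prints each RPC instead of invoking
# scripts/rpc.py, so this runs without a live SPDK target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1017
pid=$$   # in the real test: PID of the I/O generator being stressed

for i in 1 2 3; do                   # real loop: runs until $pid exits
    kill -0 "$pid" || break                   # sh@44: workload still alive?
    rpc nvmf_subsystem_remove_ns "$NQN" 1     # sh@45: hot-remove NSID 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0   # sh@46: re-attach Delay0
    null_size=$((null_size + 1))              # sh@49: next target size
    rpc bdev_null_resize NULL1 "$null_size"   # sh@50: resize NULL1
done
echo "final null_size=$null_size"
```

The loop ends (below) when `kill -0` finds the workload process gone, which is the expected exit path for this stress test.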
00:11:14.424 true 00:11:14.424 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1421426 00:11:14.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1421426) - No such process 00:11:14.424 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1421426 00:11:14.424 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.683 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.683 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:14.683 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:14.683 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:14.683 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:14.683 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:14.941 null0 00:11:14.941 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:14.941 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:14.941 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:15.200 null1 00:11:15.200 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.200 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.200 14:30:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:15.460 null2 00:11:15.460 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.460 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.460 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:15.460 null3 00:11:15.460 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.460 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.460 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:15.719 null4 00:11:15.719 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.719 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.719 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 
00:11:15.978 null5 00:11:15.978 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:15.978 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:15.978 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:16.238 null6 00:11:16.238 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:16.238 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:16.238 14:30:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:16.238 null7 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.238 14:30:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.238 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.497 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1427558 1427559 1427562 1427563 1427565 1427567 1427569 1427571 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.498 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.757 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:17.015 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.015 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.015 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:17.015 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.015 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:11:17.015 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:17.016 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:17.016 14:30:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.274 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:17.533 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:17.533 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.533 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.533 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:17.533 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:17.534 14:30:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.534 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.794 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.054 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.055 14:30:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:18.313 14:30:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:18.313 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.572 14:30:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.572 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:18.573 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:18.832 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.091 14:30:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 
14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.350 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.609 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.867 14:30:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.867 14:30:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.867 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.126 14:30:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.126 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.126 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.127 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.127 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.385 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.385 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.386 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.386 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.386 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.386 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.386 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.386 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.644 rmmod nvme_tcp 00:11:20.644 rmmod nvme_fabrics 00:11:20.644 rmmod nvme_keyring 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1420929 ']' 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1420929 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 1420929 ']' 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1420929 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1420929 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1420929' 00:11:20.644 killing process with pid 1420929 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1420929 00:11:20.644 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1420929 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.904 14:30:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.441 00:11:23.441 real 0m48.871s 00:11:23.441 user 3m18.940s 00:11:23.441 sys 0m15.611s 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.441 ************************************ 00:11:23.441 END TEST nvmf_ns_hotplug_stress 00:11:23.441 ************************************ 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.441 ************************************ 00:11:23.441 START TEST nvmf_delete_subsystem 00:11:23.441 ************************************ 00:11:23.441 
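The hotplug stress run that concluded above repeatedly executed lines 16–18 of `ns_hotplug_stress.sh`: a counting loop that attaches eight null bdevs as namespaces 1–8 of `nqn.2016-06.io.spdk:cnode1` via `rpc.py nvmf_subsystem_add_ns`, then detaches them with `nvmf_subsystem_remove_ns`. A minimal dry-run sketch of that loop follows; `rpc` here is a stand-in that only echoes the command (the real trace invokes `spdk/scripts/rpc.py` against the live target, and issues the adds/removes in shuffled order rather than sequentially):

```shell
# Dry-run sketch of the traced hotplug loop (ns_hotplug_stress.sh @16-@18).
# "rpc" is an echo stand-in, not the real spdk/scripts/rpc.py.
calls=()
rpc() { calls+=("rpc.py $*"); echo "rpc.py $*"; }

subsys=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do
    # Attach null bdevs null0..null7 as namespaces 1..8 ...
    for ns in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$ns" "$subsys" "null$((ns - 1))"
    done
    # ... then detach them again, exercising namespace hotplug.
    for ns in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$subsys" "$ns"
    done
done
```

Ten iterations of eight adds and eight removes yields the 160 RPC invocations this section of the log records.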
14:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:23.441 * Looking for test storage... 00:11:23.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.441 14:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.441 14:30:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.441 14:30:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.441 --rc genhtml_branch_coverage=1 00:11:23.441 --rc genhtml_function_coverage=1 00:11:23.441 --rc genhtml_legend=1 00:11:23.441 --rc geninfo_all_blocks=1 00:11:23.441 --rc geninfo_unexecuted_blocks=1 00:11:23.441 00:11:23.441 ' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.441 --rc genhtml_branch_coverage=1 00:11:23.441 --rc genhtml_function_coverage=1 00:11:23.441 --rc genhtml_legend=1 00:11:23.441 --rc geninfo_all_blocks=1 00:11:23.441 --rc geninfo_unexecuted_blocks=1 00:11:23.441 00:11:23.441 ' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.441 --rc genhtml_branch_coverage=1 00:11:23.441 --rc genhtml_function_coverage=1 00:11:23.441 --rc genhtml_legend=1 00:11:23.441 --rc geninfo_all_blocks=1 00:11:23.441 --rc geninfo_unexecuted_blocks=1 00:11:23.441 00:11:23.441 ' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.441 --rc genhtml_branch_coverage=1 00:11:23.441 --rc genhtml_function_coverage=1 00:11:23.441 --rc genhtml_legend=1 00:11:23.441 --rc geninfo_all_blocks=1 00:11:23.441 --rc geninfo_unexecuted_blocks=1 00:11:23.441 00:11:23.441 ' 
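The `lt 1.15 2` trace above (from `scripts/common.sh`, `cmp_versions`) decides whether the installed lcov predates version 2 by splitting both version strings on `.`, `-`, or `:` and comparing components as integers, padding the shorter string with zeros. A minimal bash sketch of that comparison, with `version_lt` as an assumed stand-in name and numeric-only components assumed (the real helper also validates each component against `^[0-9]+$`):

```shell
# Sketch of the traced dotted-version comparison: returns 0 (true) when
# $1 is strictly older than $2, comparing components numerically.
version_lt() {
    local IFS='.-:'
    local -a a=($1) b=($2)
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
    for (( v = 0; v < n; v++ )); do
        # Missing components count as 0, so "2" compares like "2.0".
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
    done
    return 1  # versions are equal
}

if version_lt 1.15 2; then echo "lcov 1.15 is older than 2"; fi
```

In the log this check succeeds, so the script falls back to the pre-2.x `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling.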
00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.441 14:30:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.441 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.442 14:30:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.015 14:30:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:30.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:30.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:30.015 Found net devices under 0000:86:00.0: cvl_0_0 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:11:30.015 Found net devices under 0000:86:00.1: cvl_0_1 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.015 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.016 14:30:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:30.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:11:30.016 00:11:30.016 --- 10.0.0.2 ping statistics --- 00:11:30.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.016 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:11:30.016 00:11:30.016 --- 10.0.0.1 ping statistics --- 00:11:30.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.016 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:30.016 14:30:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1431960 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1431960 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1431960 ']' 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 [2024-11-20 14:30:41.141175] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:11:30.016 [2024-11-20 14:30:41.141219] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.016 [2024-11-20 14:30:41.221139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:30.016 [2024-11-20 14:30:41.263093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.016 [2024-11-20 14:30:41.263126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.016 [2024-11-20 14:30:41.263134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.016 [2024-11-20 14:30:41.263140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.016 [2024-11-20 14:30:41.263145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:30.016 [2024-11-20 14:30:41.264291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.016 [2024-11-20 14:30:41.264294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 [2024-11-20 14:30:41.413776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 [2024-11-20 14:30:41.433979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 NULL1 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 Delay0 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.016 14:30:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1432055 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:30.016 14:30:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:30.016 [2024-11-20 14:30:41.544913] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:11:31.922 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.922 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.922 14:30:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error 
(sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 starting I/O failed: -6 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 [2024-11-20 14:30:43.660050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a8860 is same with the state(6) to be set 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with 
error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Write completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 00:11:31.922 Read completed with error (sct=0, sc=8) 
00:11:31.922 [repeated Read/Write "completed with error (sct=0, sc=8)" entries, interleaved with "starting I/O failed: -6"]
00:11:31.922 [2024-11-20 14:30:43.665062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc54000c40 is same with the state(6) to be set
00:11:31.922 [repeated Read/Write "completed with error (sct=0, sc=8)" entries]
00:11:32.861 [2024-11-20 14:30:44.639641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a99a0 is same with the state(6) to be set
00:11:32.861 [repeated Read/Write "completed with error (sct=0, sc=8)" entries]
00:11:32.861 [2024-11-20 14:30:44.663226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a82c0 is same with the state(6) to be set
00:11:32.861 [repeated Read/Write "completed with error (sct=0, sc=8)" entries]
00:11:32.861 [2024-11-20 14:30:44.663427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a8680 is same with the state(6) to be set
00:11:32.861 [repeated Read/Write "completed with error (sct=0, sc=8)" entries]
00:11:32.861 [2024-11-20 14:30:44.667276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc5400d020 is same with the state(6) to be set
00:11:32.861 [repeated Read/Write "completed with error (sct=0, sc=8)" entries]
00:11:32.861 [2024-11-20 14:30:44.667760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc5400d800 is same with the state(6) to be set
00:11:32.861 Initializing NVMe Controllers
00:11:32.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:32.861 Controller IO queue size 128, less than required.
00:11:32.861 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
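For context on the failed completions above: (sct=0, sc=8) is NVMe status-code type 0 (Generic Command Status) with status code 0x08, "Command Aborted due to SQ Deletion" — the expected status when the delete_subsystem test tears down the subsystem while spdk_nvme_perf still has I/O in flight. A minimal sketch of a decoder for these pairs (the table is a hand-picked subset of the NVMe base specification's generic status codes, not SPDK code; the helper name is hypothetical):

```python
# Map a subset of NVMe Generic Command Status codes (sct=0) to their
# names from the NVMe base specification.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for an NVMe completion status pair."""
    if sct == 0:  # Generic Command Status type
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct={sct} sc=0x{sc:02x}"

print(decode_status(0, 8))  # -> Command Aborted due to SQ Deletion
```

Every "Read/Write completed with error (sct=0, sc=8)" line in this log decodes to that same aborted-on-queue-deletion status.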
00:11:32.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:32.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:32.861 Initialization complete. Launching workers.
00:11:32.861 ========================================================
00:11:32.861 Latency(us)
00:11:32.861 Device Information : IOPS MiB/s Average min max
00:11:32.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.78 0.08 896598.14 285.98 1006419.10
00:11:32.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.81 0.08 912648.25 264.99 1010208.83
00:11:32.861 ========================================================
00:11:32.861 Total : 331.59 0.16 904478.60 264.99 1010208.83
00:11:32.861
00:11:32.861 [2024-11-20 14:30:44.668234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a99a0 (9): Bad file descriptor
00:11:32.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:32.861 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.861 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:32.861 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1432055
00:11:32.861 14:30:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1432055
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1432055) - No such process
00:11:33.429 14:30:45
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1432055 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1432055 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1432055 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.429 [2024-11-20 14:30:45.193790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1432678 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678 00:11:33.429 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.429 [2024-11-20 14:30:45.286401] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:33.996 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.996 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678 00:11:33.996 14:30:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.563 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.563 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678 00:11:34.563 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:34.821 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.821 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678 00:11:34.821 14:30:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.389 14:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:35.389 14:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678 00:11:35.389 14:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.957 14:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:35.957 14:30:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678
00:11:35.957 14:30:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:36.525 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:36.525 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678
00:11:36.525 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:36.784 Initializing NVMe Controllers
00:11:36.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:36.784 Controller IO queue size 128, less than required.
00:11:36.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:36.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:36.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:36.784 Initialization complete. Launching workers.
00:11:36.784 ========================================================
00:11:36.784 Latency(us)
00:11:36.784 Device Information : IOPS MiB/s Average min max
00:11:36.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002639.55 1000128.52 1042259.83
00:11:36.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003440.38 1000207.13 1010548.65
00:11:36.784 ========================================================
00:11:36.784 Total : 256.00 0.12 1003039.97 1000128.52 1042259.83
00:11:36.784
00:11:36.784 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:36.784 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1432678
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1432678) - No such process
00:11:36.784 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1432678
00:11:36.784 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:36.784 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:37.043 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:11:37.044 rmmod nvme_tcp 00:11:37.044 rmmod nvme_fabrics 00:11:37.044 rmmod nvme_keyring 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1431960 ']' 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1431960 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1431960 ']' 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1431960 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431960 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431960' 00:11:37.044 killing process with pid 1431960 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1431960 00:11:37.044 14:30:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1431960 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.304 14:30:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.210 00:11:39.210 real 0m16.217s 00:11:39.210 user 0m29.180s 00:11:39.210 sys 0m5.570s 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 ************************************ 00:11:39.210 END TEST 
nvmf_delete_subsystem 00:11:39.210 ************************************ 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:39.210 ************************************ 00:11:39.210 START TEST nvmf_host_management 00:11:39.210 ************************************ 00:11:39.210 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:39.470 * Looking for test storage... 00:11:39.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.470 14:30:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.470 --rc genhtml_branch_coverage=1 00:11:39.470 --rc genhtml_function_coverage=1 00:11:39.470 --rc genhtml_legend=1 00:11:39.470 --rc 
geninfo_all_blocks=1 00:11:39.470 --rc geninfo_unexecuted_blocks=1 00:11:39.470 00:11:39.470 ' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.470 --rc genhtml_branch_coverage=1 00:11:39.470 --rc genhtml_function_coverage=1 00:11:39.470 --rc genhtml_legend=1 00:11:39.470 --rc geninfo_all_blocks=1 00:11:39.470 --rc geninfo_unexecuted_blocks=1 00:11:39.470 00:11:39.470 ' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.470 --rc genhtml_branch_coverage=1 00:11:39.470 --rc genhtml_function_coverage=1 00:11:39.470 --rc genhtml_legend=1 00:11:39.470 --rc geninfo_all_blocks=1 00:11:39.470 --rc geninfo_unexecuted_blocks=1 00:11:39.470 00:11:39.470 ' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.470 --rc genhtml_branch_coverage=1 00:11:39.470 --rc genhtml_function_coverage=1 00:11:39.470 --rc genhtml_legend=1 00:11:39.470 --rc geninfo_all_blocks=1 00:11:39.470 --rc geninfo_unexecuted_blocks=1 00:11:39.470 00:11:39.470 ' 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.470 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.471 
14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.471 14:30:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:46.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:46.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.044 14:30:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.044 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:46.045 Found net devices under 0000:86:00.0: cvl_0_0 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:46.045 Found net devices under 0000:86:00.1: cvl_0_1 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:11:46.045 00:11:46.045 --- 10.0.0.2 ping statistics --- 00:11:46.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.045 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:11:46.045 00:11:46.045 --- 10.0.0.1 ping statistics --- 00:11:46.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.045 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.045 14:30:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1436908 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1436908 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1436908 ']' 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.045 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 [2024-11-20 14:30:57.450682] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:11:46.046 [2024-11-20 14:30:57.450727] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.046 [2024-11-20 14:30:57.529314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.046 [2024-11-20 14:30:57.572788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.046 [2024-11-20 14:30:57.572828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.046 [2024-11-20 14:30:57.572837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.046 [2024-11-20 14:30:57.572843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.046 [2024-11-20 14:30:57.572848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:46.046 [2024-11-20 14:30:57.574498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.046 [2024-11-20 14:30:57.574606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.046 [2024-11-20 14:30:57.574710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.046 [2024-11-20 14:30:57.574712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 [2024-11-20 14:30:57.713170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:46.046 14:30:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 Malloc0 00:11:46.046 [2024-11-20 14:30:57.780842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1436949 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1436949 /var/tmp/bdevperf.sock 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1436949 ']' 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:46.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:46.046 { 00:11:46.046 "params": { 00:11:46.046 "name": "Nvme$subsystem", 00:11:46.046 "trtype": "$TEST_TRANSPORT", 00:11:46.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.046 "adrfam": "ipv4", 00:11:46.046 "trsvcid": "$NVMF_PORT", 00:11:46.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.046 "hdgst": ${hdgst:-false}, 
00:11:46.046 "ddgst": ${ddgst:-false} 00:11:46.046 }, 00:11:46.046 "method": "bdev_nvme_attach_controller" 00:11:46.046 } 00:11:46.046 EOF 00:11:46.046 )") 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:46.046 14:30:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:46.046 "params": { 00:11:46.046 "name": "Nvme0", 00:11:46.046 "trtype": "tcp", 00:11:46.046 "traddr": "10.0.0.2", 00:11:46.046 "adrfam": "ipv4", 00:11:46.046 "trsvcid": "4420", 00:11:46.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:46.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:46.046 "hdgst": false, 00:11:46.046 "ddgst": false 00:11:46.046 }, 00:11:46.046 "method": "bdev_nvme_attach_controller" 00:11:46.046 }' 00:11:46.046 [2024-11-20 14:30:57.877229] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:11:46.046 [2024-11-20 14:30:57.877271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436949 ] 00:11:46.046 [2024-11-20 14:30:57.954335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.046 [2024-11-20 14:30:57.995781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.305 Running I/O for 10 seconds... 
00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=99 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 99 -ge 100 ']' 00:11:46.305 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:46.564 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:46.564 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:46.564 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:46.564 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:46.564 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.564 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.825 [2024-11-20 14:30:58.555375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 
14:30:58.555456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555528] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.555540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12510b0 is same with the state(6) to be set 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.825 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:46.825 [2024-11-20 14:30:58.561910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.825 [2024-11-20 14:30:58.561943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.561959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.825 [2024-11-20 14:30:58.561966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.561974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.825 [2024-11-20 14:30:58.561981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.561989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.825 [2024-11-20 14:30:58.561996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.562007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de1500 is same with the state(6) to be set 00:11:46.825 [2024-11-20 14:30:58.562266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.825 [2024-11-20 14:30:58.562277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.562290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.825 [2024-11-20 14:30:58.562298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.562306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.825 [2024-11-20 14:30:58.562313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.562321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.825 [2024-11-20 14:30:58.562328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 
14:30:58.562336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.825 [2024-11-20 14:30:58.562343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.825 [2024-11-20 14:30:58.562351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.825 [2024-11-20 14:30:58.562357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 
[2024-11-20 14:30:58.562669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562833] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.826 [2024-11-20 14:30:58.562906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.826 [2024-11-20 14:30:58.562914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.562921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.562929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.562935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.562944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.562956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.562964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.562971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.562985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.562994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:46.827 [2024-11-20 14:30:58.563010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 
14:30:58.563091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.563220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:46.827 [2024-11-20 14:30:58.563227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.827 [2024-11-20 14:30:58.564178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:46.827 task offset: 98304 on job bdev=Nvme0n1 fails 00:11:46.827 00:11:46.827 Latency(us) 00:11:46.827 [2024-11-20T13:30:58.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.827 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:46.827 Job: Nvme0n1 ended in about 0.40 seconds with error 00:11:46.827 Verification LBA range: start 0x0 length 0x400 00:11:46.827 Nvme0n1 : 0.40 1897.48 118.59 158.12 0.00 30286.00 
1631.28 28038.01 00:11:46.827 [2024-11-20T13:30:58.785Z] =================================================================================================================== 00:11:46.827 [2024-11-20T13:30:58.785Z] Total : 1897.48 118.59 158.12 0.00 30286.00 1631.28 28038.01 00:11:46.827 [2024-11-20 14:30:58.566569] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:46.827 [2024-11-20 14:30:58.566590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de1500 (9): Bad file descriptor 00:11:46.827 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.827 14:30:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:46.827 [2024-11-20 14:30:58.577104] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1436949 00:11:47.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1436949) - No such process 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # 
config=() 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:47.760 { 00:11:47.760 "params": { 00:11:47.760 "name": "Nvme$subsystem", 00:11:47.760 "trtype": "$TEST_TRANSPORT", 00:11:47.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:47.760 "adrfam": "ipv4", 00:11:47.760 "trsvcid": "$NVMF_PORT", 00:11:47.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:47.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:47.760 "hdgst": ${hdgst:-false}, 00:11:47.760 "ddgst": ${ddgst:-false} 00:11:47.760 }, 00:11:47.760 "method": "bdev_nvme_attach_controller" 00:11:47.760 } 00:11:47.760 EOF 00:11:47.760 )") 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:47.760 14:30:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:47.760 "params": { 00:11:47.760 "name": "Nvme0", 00:11:47.760 "trtype": "tcp", 00:11:47.760 "traddr": "10.0.0.2", 00:11:47.760 "adrfam": "ipv4", 00:11:47.760 "trsvcid": "4420", 00:11:47.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:47.760 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:47.760 "hdgst": false, 00:11:47.760 "ddgst": false 00:11:47.760 }, 00:11:47.760 "method": "bdev_nvme_attach_controller" 00:11:47.760 }' 00:11:47.760 [2024-11-20 14:30:59.623409] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:11:47.760 [2024-11-20 14:30:59.623455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437231 ] 00:11:47.760 [2024-11-20 14:30:59.701073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.017 [2024-11-20 14:30:59.741340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.017 Running I/O for 1 seconds... 00:11:49.212 1984.00 IOPS, 124.00 MiB/s 00:11:49.212 Latency(us) 00:11:49.212 [2024-11-20T13:31:01.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.212 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:49.212 Verification LBA range: start 0x0 length 0x400 00:11:49.212 Nvme0n1 : 1.02 2006.03 125.38 0.00 0.00 31397.55 4188.61 27810.06 00:11:49.212 [2024-11-20T13:31:01.170Z] =================================================================================================================== 00:11:49.212 [2024-11-20T13:31:01.170Z] Total : 2006.03 125.38 0.00 0.00 31397.55 4188.61 27810.06 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:49.212 14:31:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.212 rmmod nvme_tcp 00:11:49.212 rmmod nvme_fabrics 00:11:49.212 rmmod nvme_keyring 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1436908 ']' 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1436908 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1436908 ']' 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1436908 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.212 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1436908 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1436908' 00:11:49.471 killing process with pid 1436908 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1436908 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1436908 00:11:49.471 [2024-11-20 14:31:01.365070] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.471 14:31:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.522 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.522 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:51.522 00:11:51.522 real 0m12.303s 00:11:51.522 user 0m19.060s 00:11:51.522 sys 0m5.584s 00:11:51.522 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.522 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:51.522 ************************************ 00:11:51.522 END TEST nvmf_host_management 00:11:51.522 ************************************ 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:51.782 ************************************ 00:11:51.782 START TEST nvmf_lvol 00:11:51.782 ************************************ 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:51.782 * Looking for test storage... 
00:11:51.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.782 14:31:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.782 --rc genhtml_branch_coverage=1 00:11:51.782 --rc genhtml_function_coverage=1 00:11:51.782 --rc genhtml_legend=1 00:11:51.782 --rc geninfo_all_blocks=1 00:11:51.782 --rc geninfo_unexecuted_blocks=1 
00:11:51.782 00:11:51.782 ' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.782 --rc genhtml_branch_coverage=1 00:11:51.782 --rc genhtml_function_coverage=1 00:11:51.782 --rc genhtml_legend=1 00:11:51.782 --rc geninfo_all_blocks=1 00:11:51.782 --rc geninfo_unexecuted_blocks=1 00:11:51.782 00:11:51.782 ' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.782 --rc genhtml_branch_coverage=1 00:11:51.782 --rc genhtml_function_coverage=1 00:11:51.782 --rc genhtml_legend=1 00:11:51.782 --rc geninfo_all_blocks=1 00:11:51.782 --rc geninfo_unexecuted_blocks=1 00:11:51.782 00:11:51.782 ' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.782 --rc genhtml_branch_coverage=1 00:11:51.782 --rc genhtml_function_coverage=1 00:11:51.782 --rc genhtml_legend=1 00:11:51.782 --rc geninfo_all_blocks=1 00:11:51.782 --rc geninfo_unexecuted_blocks=1 00:11:51.782 00:11:51.782 ' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.782 14:31:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.782 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.783 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.042 14:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:58.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:58.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.615 
14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:58.615 Found net devices under 0000:86:00.0: cvl_0_0 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.615 14:31:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:58.615 Found net devices under 0000:86:00.1: cvl_0_1 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:58.615 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:58.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:58.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:11:58.616 00:11:58.616 --- 10.0.0.2 ping statistics --- 00:11:58.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.616 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:11:58.616 00:11:58.616 --- 10.0.0.1 ping statistics --- 00:11:58.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.616 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1441202 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1441202 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1441202 ']' 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.616 14:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.616 [2024-11-20 14:31:09.855643] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:11:58.616 [2024-11-20 14:31:09.855688] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.616 [2024-11-20 14:31:09.936041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.616 [2024-11-20 14:31:09.978673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.616 [2024-11-20 14:31:09.978710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.616 [2024-11-20 14:31:09.978717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.616 [2024-11-20 14:31:09.978723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.616 [2024-11-20 14:31:09.978728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
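The trace above (nvmf/common.sh@250-291) builds a two-port loopback topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal dry-run sketch of those steps, with interface names and IPs taken from this log; `run()` only prints each command, so it can be executed without root or the real NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as traced in this log. Interface
# names, IPs, and the 4420 iptables rule mirror the log; run() only
# prints, so no root privileges or hardware are needed.

TARGET_IF=cvl_0_0       # target side: 10.0.0.2 inside the namespace
INITIATOR_IF=cvl_0_1    # initiator side: 10.0.0.1 in the root namespace
NETNS="${TARGET_IF}_ns_spdk"

run() { printf '%s\n' "$*"; }   # swap for: eval "$*" (as root) to apply

setup_tcp_netns() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NETNS"
    run ip link set "$TARGET_IF" netns "$NETNS"
    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
    run ip netns exec "$NETNS" ip link set lo up
    # accept NVMe/TCP traffic arriving on the initiator-side interface
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # the log pings both directions before starting nvmf_tgt in the netns
    run ping -c 1 10.0.0.2
    run ip netns exec "$NETNS" ping -c 1 10.0.0.1
}

setup_tcp_netns
```

Note that `nvmf_tgt` is then launched via `ip netns exec cvl_0_0_ns_spdk …` (NVMF_TARGET_NS_CMD), which is why the target listens on 10.0.0.2 inside the namespace while perf tools connect from the root namespace.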
00:11:58.616 [2024-11-20 14:31:09.980144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.616 [2024-11-20 14:31:09.980256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.616 [2024-11-20 14:31:09.980257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.875 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:59.133 [2024-11-20 14:31:10.908852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.133 14:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.391 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:59.391 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.649 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:59.649 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:59.649 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:59.908 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=36b52736-6955-4380-9306-77006530e262 00:11:59.908 14:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36b52736-6955-4380-9306-77006530e262 lvol 20 00:12:00.167 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ae11c109-331a-4519-9e6e-2b942761954a 00:12:00.167 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:00.426 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae11c109-331a-4519-9e6e-2b942761954a 00:12:00.740 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:00.740 [2024-11-20 14:31:12.600752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.740 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.999 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1441701 00:12:00.999 14:31:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:00.999 14:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:01.936 14:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ae11c109-331a-4519-9e6e-2b942761954a MY_SNAPSHOT 00:12:02.196 14:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e2d6c346-f805-4f1e-b8de-d63dc33ca94e 00:12:02.196 14:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ae11c109-331a-4519-9e6e-2b942761954a 30 00:12:02.454 14:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e2d6c346-f805-4f1e-b8de-d63dc33ca94e MY_CLONE 00:12:02.713 14:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=66b8d49d-b498-42b2-9746-65a95f29cbb2 00:12:02.713 14:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 66b8d49d-b498-42b2-9746-65a95f29cbb2 00:12:03.280 14:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1441701 00:12:11.397 Initializing NVMe Controllers 00:12:11.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:11.397 Controller IO queue size 128, less than required. 00:12:11.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
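The nvmf_lvol test steps traced above (target/nvmf_lvol.sh@21-50) can be summarized as the following rpc.py flow: build a raid0 from two malloc bdevs, put an lvstore and a 20 MiB lvol on it, export the lvol over NVMe/TCP, then snapshot/resize/clone/inflate while spdk_nvme_perf writes randomly. A dry-run sketch; the `RPC` path is an assumption for a generic SPDK tree, and the `<…-uuid>` placeholders stand in for the UUIDs the real RPCs return:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_lvol RPC sequence traced in this log.
# RPC path and <uuid> placeholders are illustrative; run() only prints.
RPC="scripts/rpc.py"            # assumed path; adjust for your tree

run() { printf '%s %s\n' "$RPC" "$*"; }   # swap echo for the real call

lvol_flow() {
    run nvmf_create_transport -t tcp -o -u 8192
    run bdev_malloc_create 64 512                 # -> Malloc0
    run bdev_malloc_create 64 512                 # -> Malloc1
    run bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    run bdev_lvol_create_lvstore raid0 lvs        # prints the lvstore UUID
    run bdev_lvol_create -u '<lvs-uuid>' lvol 20  # 20 MiB lvol on the lvstore
    run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 '<lvol-uuid>'
    run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf runs randwrite over TCP, exercise lvol metadata ops:
    run bdev_lvol_snapshot '<lvol-uuid>' MY_SNAPSHOT
    run bdev_lvol_resize '<lvol-uuid>' 30
    run bdev_lvol_clone '<snapshot-uuid>' MY_CLONE
    run bdev_lvol_inflate '<clone-uuid>'
}

lvol_flow
```

Running snapshot, resize, clone, and inflate concurrently with I/O is the point of the test: it verifies the lvol metadata paths under load before the subsystem and lvstore are torn down.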
00:12:11.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:11.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:11.397 Initialization complete. Launching workers. 00:12:11.397 ======================================================== 00:12:11.397 Latency(us) 00:12:11.397 Device Information : IOPS MiB/s Average min max 00:12:11.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11477.80 44.84 11155.83 1609.56 58941.59 00:12:11.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11571.00 45.20 11060.76 1430.18 80256.92 00:12:11.397 ======================================================== 00:12:11.397 Total : 23048.80 90.03 11108.11 1430.18 80256.92 00:12:11.397 00:12:11.397 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:11.655 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae11c109-331a-4519-9e6e-2b942761954a 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36b52736-6955-4380-9306-77006530e262 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.914 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.914 rmmod nvme_tcp 00:12:12.174 rmmod nvme_fabrics 00:12:12.174 rmmod nvme_keyring 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1441202 ']' 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1441202 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1441202 ']' 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1441202 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1441202 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1441202' 00:12:12.174 killing process with pid 1441202 00:12:12.174 14:31:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1441202 00:12:12.174 14:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1441202 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.434 14:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.341 00:12:14.341 real 0m22.718s 00:12:14.341 user 1m5.320s 00:12:14.341 sys 0m7.774s 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 ************************************ 00:12:14.341 END TEST 
nvmf_lvol 00:12:14.341 ************************************ 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.341 14:31:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:14.602 ************************************ 00:12:14.602 START TEST nvmf_lvs_grow 00:12:14.602 ************************************ 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:14.602 * Looking for test storage... 00:12:14.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.602 14:31:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.602 --rc genhtml_branch_coverage=1 00:12:14.602 --rc genhtml_function_coverage=1 00:12:14.602 --rc genhtml_legend=1 00:12:14.602 --rc geninfo_all_blocks=1 00:12:14.602 --rc geninfo_unexecuted_blocks=1 00:12:14.602 00:12:14.602 ' 
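The scripts/common.sh trace above (`lt 1.15 2` via `cmp_versions`) splits both version strings on `.`, `-`, and `:` and compares field by field numerically, padding the shorter one with zeros. A reimplementation sketch of that comparison; the function name is mine, not the script's, and it assumes purely numeric fields (the real helper guards each field with `[[ $d =~ ^[0-9]+$ ]]`):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: true (0) iff $1 < $2.
# Splits on . - : as in scripts/common.sh; numeric fields assumed.
version_lt() {
    local IFS=.-: a b i n x y
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}       # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                             # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"     # the lcov check in this log
```

This is why `lcov --version` 1.15 takes the pre-2.x option set here: 1 < 2 decides the comparison at the first field.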
00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.602 --rc genhtml_branch_coverage=1 00:12:14.602 --rc genhtml_function_coverage=1 00:12:14.602 --rc genhtml_legend=1 00:12:14.602 --rc geninfo_all_blocks=1 00:12:14.602 --rc geninfo_unexecuted_blocks=1 00:12:14.602 00:12:14.602 ' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.602 --rc genhtml_branch_coverage=1 00:12:14.602 --rc genhtml_function_coverage=1 00:12:14.602 --rc genhtml_legend=1 00:12:14.602 --rc geninfo_all_blocks=1 00:12:14.602 --rc geninfo_unexecuted_blocks=1 00:12:14.602 00:12:14.602 ' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.602 --rc genhtml_branch_coverage=1 00:12:14.602 --rc genhtml_function_coverage=1 00:12:14.602 --rc genhtml_legend=1 00:12:14.602 --rc geninfo_all_blocks=1 00:12:14.602 --rc geninfo_unexecuted_blocks=1 00:12:14.602 00:12:14.602 ' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.602 14:31:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.602 
14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.602 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.603 14:31:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.603 
14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.603 14:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.174 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:21.175 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.175 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.175 
14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.175 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.175 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.175 14:31:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:12:21.175 00:12:21.175 --- 10.0.0.2 ping statistics --- 00:12:21.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.175 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:21.175 00:12:21.175 --- 10.0.0.1 ping statistics --- 00:12:21.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.175 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1447084 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:21.175 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1447084 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1447084 ']' 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.176 [2024-11-20 14:31:32.636490] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:21.176 [2024-11-20 14:31:32.636537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.176 [2024-11-20 14:31:32.719033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.176 [2024-11-20 14:31:32.760094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.176 [2024-11-20 14:31:32.760131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.176 [2024-11-20 14:31:32.760138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.176 [2024-11-20 14:31:32.760144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.176 [2024-11-20 14:31:32.760149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.176 [2024-11-20 14:31:32.760687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.176 14:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:21.176 [2024-11-20 14:31:33.068564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:21.176 ************************************ 00:12:21.176 START TEST lvs_grow_clean 00:12:21.176 ************************************ 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:21.176 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:21.435 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:21.435 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:21.693 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba561269-8700-4a01-8747-c72040e5a2e5 00:12:21.693 14:31:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:21.693 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:21.952 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:21.952 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:21.952 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ba561269-8700-4a01-8747-c72040e5a2e5 lvol 150 00:12:22.211 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c8e62feb-5d5c-43ab-a924-4896124f134e 00:12:22.211 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.211 14:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:22.211 [2024-11-20 14:31:34.121686] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:22.211 [2024-11-20 14:31:34.121736] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:22.211 true 00:12:22.211 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:22.211 14:31:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:22.470 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:22.470 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:22.728 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8e62feb-5d5c-43ab-a924-4896124f134e 00:12:22.987 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:22.987 [2024-11-20 14:31:34.872042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.987 14:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1447582 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1447582 /var/tmp/bdevperf.sock 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1447582 ']' 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:23.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.246 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:23.246 [2024-11-20 14:31:35.108760] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:23.246 [2024-11-20 14:31:35.108811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447582 ] 00:12:23.246 [2024-11-20 14:31:35.183914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.505 [2024-11-20 14:31:35.227200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.505 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.505 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:23.505 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:23.764 Nvme0n1 00:12:23.764 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:24.023 [ 00:12:24.023 { 00:12:24.023 "name": "Nvme0n1", 00:12:24.023 "aliases": [ 00:12:24.023 "c8e62feb-5d5c-43ab-a924-4896124f134e" 00:12:24.023 ], 00:12:24.023 "product_name": "NVMe disk", 00:12:24.023 "block_size": 4096, 00:12:24.023 "num_blocks": 38912, 00:12:24.023 "uuid": "c8e62feb-5d5c-43ab-a924-4896124f134e", 00:12:24.023 "numa_id": 1, 00:12:24.023 "assigned_rate_limits": { 00:12:24.023 "rw_ios_per_sec": 0, 00:12:24.023 "rw_mbytes_per_sec": 0, 00:12:24.023 "r_mbytes_per_sec": 0, 00:12:24.023 "w_mbytes_per_sec": 0 00:12:24.023 }, 00:12:24.023 "claimed": false, 00:12:24.023 "zoned": false, 00:12:24.023 "supported_io_types": { 00:12:24.023 "read": true, 
00:12:24.023 "write": true, 00:12:24.023 "unmap": true, 00:12:24.023 "flush": true, 00:12:24.023 "reset": true, 00:12:24.023 "nvme_admin": true, 00:12:24.023 "nvme_io": true, 00:12:24.023 "nvme_io_md": false, 00:12:24.023 "write_zeroes": true, 00:12:24.023 "zcopy": false, 00:12:24.023 "get_zone_info": false, 00:12:24.023 "zone_management": false, 00:12:24.023 "zone_append": false, 00:12:24.023 "compare": true, 00:12:24.023 "compare_and_write": true, 00:12:24.023 "abort": true, 00:12:24.023 "seek_hole": false, 00:12:24.023 "seek_data": false, 00:12:24.023 "copy": true, 00:12:24.023 "nvme_iov_md": false 00:12:24.023 }, 00:12:24.023 "memory_domains": [ 00:12:24.023 { 00:12:24.023 "dma_device_id": "system", 00:12:24.023 "dma_device_type": 1 00:12:24.023 } 00:12:24.023 ], 00:12:24.023 "driver_specific": { 00:12:24.023 "nvme": [ 00:12:24.023 { 00:12:24.023 "trid": { 00:12:24.023 "trtype": "TCP", 00:12:24.023 "adrfam": "IPv4", 00:12:24.023 "traddr": "10.0.0.2", 00:12:24.023 "trsvcid": "4420", 00:12:24.023 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:24.023 }, 00:12:24.023 "ctrlr_data": { 00:12:24.023 "cntlid": 1, 00:12:24.023 "vendor_id": "0x8086", 00:12:24.023 "model_number": "SPDK bdev Controller", 00:12:24.023 "serial_number": "SPDK0", 00:12:24.023 "firmware_revision": "25.01", 00:12:24.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:24.023 "oacs": { 00:12:24.023 "security": 0, 00:12:24.023 "format": 0, 00:12:24.023 "firmware": 0, 00:12:24.023 "ns_manage": 0 00:12:24.023 }, 00:12:24.023 "multi_ctrlr": true, 00:12:24.023 "ana_reporting": false 00:12:24.023 }, 00:12:24.023 "vs": { 00:12:24.023 "nvme_version": "1.3" 00:12:24.023 }, 00:12:24.023 "ns_data": { 00:12:24.023 "id": 1, 00:12:24.023 "can_share": true 00:12:24.023 } 00:12:24.023 } 00:12:24.023 ], 00:12:24.023 "mp_policy": "active_passive" 00:12:24.023 } 00:12:24.023 } 00:12:24.023 ] 00:12:24.023 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1447604 00:12:24.023 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:24.023 14:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:24.023 Running I/O for 10 seconds... 00:12:24.960 Latency(us) 00:12:24.960 [2024-11-20T13:31:36.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.960 Nvme0n1 : 1.00 22436.00 87.64 0.00 0.00 0.00 0.00 0.00 00:12:24.960 [2024-11-20T13:31:36.918Z] =================================================================================================================== 00:12:24.960 [2024-11-20T13:31:36.918Z] Total : 22436.00 87.64 0.00 0.00 0.00 0.00 0.00 00:12:24.960 00:12:25.897 14:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:26.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.155 Nvme0n1 : 2.00 22529.00 88.00 0.00 0.00 0.00 0.00 0.00 00:12:26.155 [2024-11-20T13:31:38.113Z] =================================================================================================================== 00:12:26.155 [2024-11-20T13:31:38.113Z] Total : 22529.00 88.00 0.00 0.00 0.00 0.00 0.00 00:12:26.155 00:12:26.155 true 00:12:26.155 14:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:26.155 14:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:26.414 14:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:26.414 14:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:26.414 14:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1447604 00:12:26.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.982 Nvme0n1 : 3.00 22534.33 88.02 0.00 0.00 0.00 0.00 0.00 00:12:26.982 [2024-11-20T13:31:38.940Z] =================================================================================================================== 00:12:26.982 [2024-11-20T13:31:38.940Z] Total : 22534.33 88.02 0.00 0.00 0.00 0.00 0.00 00:12:26.982 00:12:28.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.358 Nvme0n1 : 4.00 22586.75 88.23 0.00 0.00 0.00 0.00 0.00 00:12:28.358 [2024-11-20T13:31:40.316Z] =================================================================================================================== 00:12:28.358 [2024-11-20T13:31:40.316Z] Total : 22586.75 88.23 0.00 0.00 0.00 0.00 0.00 00:12:28.358 00:12:29.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.294 Nvme0n1 : 5.00 22606.60 88.31 0.00 0.00 0.00 0.00 0.00 00:12:29.294 [2024-11-20T13:31:41.252Z] =================================================================================================================== 00:12:29.294 [2024-11-20T13:31:41.252Z] Total : 22606.60 88.31 0.00 0.00 0.00 0.00 0.00 00:12:29.294 00:12:30.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.231 Nvme0n1 : 6.00 22642.00 88.45 0.00 0.00 0.00 0.00 0.00 00:12:30.231 [2024-11-20T13:31:42.189Z] =================================================================================================================== 00:12:30.231 
[2024-11-20T13:31:42.189Z] Total : 22642.00 88.45 0.00 0.00 0.00 0.00 0.00 00:12:30.231 00:12:31.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.168 Nvme0n1 : 7.00 22678.71 88.59 0.00 0.00 0.00 0.00 0.00 00:12:31.168 [2024-11-20T13:31:43.126Z] =================================================================================================================== 00:12:31.168 [2024-11-20T13:31:43.126Z] Total : 22678.71 88.59 0.00 0.00 0.00 0.00 0.00 00:12:31.168 00:12:32.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.104 Nvme0n1 : 8.00 22705.25 88.69 0.00 0.00 0.00 0.00 0.00 00:12:32.104 [2024-11-20T13:31:44.062Z] =================================================================================================================== 00:12:32.104 [2024-11-20T13:31:44.062Z] Total : 22705.25 88.69 0.00 0.00 0.00 0.00 0.00 00:12:32.104 00:12:33.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.040 Nvme0n1 : 9.00 22723.22 88.76 0.00 0.00 0.00 0.00 0.00 00:12:33.040 [2024-11-20T13:31:44.998Z] =================================================================================================================== 00:12:33.040 [2024-11-20T13:31:44.998Z] Total : 22723.22 88.76 0.00 0.00 0.00 0.00 0.00 00:12:33.040 00:12:34.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.419 Nvme0n1 : 10.00 22736.60 88.81 0.00 0.00 0.00 0.00 0.00 00:12:34.419 [2024-11-20T13:31:46.377Z] =================================================================================================================== 00:12:34.419 [2024-11-20T13:31:46.377Z] Total : 22736.60 88.81 0.00 0.00 0.00 0.00 0.00 00:12:34.419 00:12:34.419 00:12:34.419 Latency(us) 00:12:34.419 [2024-11-20T13:31:46.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:34.419 Nvme0n1 : 10.00 22738.62 88.82 0.00 0.00 5626.02 2137.04 10314.80 00:12:34.419 [2024-11-20T13:31:46.377Z] =================================================================================================================== 00:12:34.419 [2024-11-20T13:31:46.377Z] Total : 22738.62 88.82 0.00 0.00 5626.02 2137.04 10314.80 00:12:34.419 { 00:12:34.419 "results": [ 00:12:34.419 { 00:12:34.419 "job": "Nvme0n1", 00:12:34.419 "core_mask": "0x2", 00:12:34.419 "workload": "randwrite", 00:12:34.419 "status": "finished", 00:12:34.419 "queue_depth": 128, 00:12:34.419 "io_size": 4096, 00:12:34.419 "runtime": 10.004742, 00:12:34.419 "iops": 22738.617347653744, 00:12:34.419 "mibps": 88.82272401427244, 00:12:34.419 "io_failed": 0, 00:12:34.419 "io_timeout": 0, 00:12:34.419 "avg_latency_us": 5626.0188838004715, 00:12:34.419 "min_latency_us": 2137.0434782608695, 00:12:34.419 "max_latency_us": 10314.79652173913 00:12:34.419 } 00:12:34.419 ], 00:12:34.419 "core_count": 1 00:12:34.419 } 00:12:34.419 14:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1447582 00:12:34.419 14:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1447582 ']' 00:12:34.419 14:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1447582 00:12:34.419 14:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:34.419 14:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.419 14:31:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447582 00:12:34.419 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:34.419 14:31:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:34.419 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1447582' 00:12:34.419 killing process with pid 1447582 00:12:34.419 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1447582 00:12:34.419 Received shutdown signal, test time was about 10.000000 seconds 00:12:34.419 00:12:34.419 Latency(us) 00:12:34.419 [2024-11-20T13:31:46.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.419 [2024-11-20T13:31:46.377Z] =================================================================================================================== 00:12:34.419 [2024-11-20T13:31:46.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:34.419 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1447582 00:12:34.419 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:34.419 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:34.677 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:34.677 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:34.935 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:12:34.935 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:34.935 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:35.193 [2024-11-20 14:31:46.947683] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.193 
14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:35.193 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:35.194 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:35.194 14:31:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:35.453 request: 00:12:35.453 { 00:12:35.453 "uuid": "ba561269-8700-4a01-8747-c72040e5a2e5", 00:12:35.453 "method": "bdev_lvol_get_lvstores", 00:12:35.453 "req_id": 1 00:12:35.453 } 00:12:35.453 Got JSON-RPC error response 00:12:35.453 response: 00:12:35.453 { 00:12:35.453 "code": -19, 00:12:35.453 "message": "No such device" 00:12:35.453 } 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:35.453 aio_bdev 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c8e62feb-5d5c-43ab-a924-4896124f134e 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c8e62feb-5d5c-43ab-a924-4896124f134e 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.453 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:35.712 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c8e62feb-5d5c-43ab-a924-4896124f134e -t 2000 00:12:35.971 [ 00:12:35.971 { 00:12:35.971 "name": "c8e62feb-5d5c-43ab-a924-4896124f134e", 00:12:35.971 "aliases": [ 00:12:35.971 "lvs/lvol" 00:12:35.971 ], 00:12:35.971 "product_name": "Logical Volume", 00:12:35.971 "block_size": 4096, 00:12:35.971 "num_blocks": 38912, 00:12:35.971 "uuid": "c8e62feb-5d5c-43ab-a924-4896124f134e", 00:12:35.972 "assigned_rate_limits": { 00:12:35.972 "rw_ios_per_sec": 0, 00:12:35.972 "rw_mbytes_per_sec": 0, 00:12:35.972 "r_mbytes_per_sec": 0, 00:12:35.972 "w_mbytes_per_sec": 0 00:12:35.972 }, 00:12:35.972 "claimed": false, 00:12:35.972 "zoned": false, 00:12:35.972 "supported_io_types": { 00:12:35.972 "read": true, 00:12:35.972 "write": true, 00:12:35.972 "unmap": true, 00:12:35.972 "flush": false, 00:12:35.972 "reset": true, 00:12:35.972 
"nvme_admin": false, 00:12:35.972 "nvme_io": false, 00:12:35.972 "nvme_io_md": false, 00:12:35.972 "write_zeroes": true, 00:12:35.972 "zcopy": false, 00:12:35.972 "get_zone_info": false, 00:12:35.972 "zone_management": false, 00:12:35.972 "zone_append": false, 00:12:35.972 "compare": false, 00:12:35.972 "compare_and_write": false, 00:12:35.972 "abort": false, 00:12:35.972 "seek_hole": true, 00:12:35.972 "seek_data": true, 00:12:35.972 "copy": false, 00:12:35.972 "nvme_iov_md": false 00:12:35.972 }, 00:12:35.972 "driver_specific": { 00:12:35.972 "lvol": { 00:12:35.972 "lvol_store_uuid": "ba561269-8700-4a01-8747-c72040e5a2e5", 00:12:35.972 "base_bdev": "aio_bdev", 00:12:35.972 "thin_provision": false, 00:12:35.972 "num_allocated_clusters": 38, 00:12:35.972 "snapshot": false, 00:12:35.972 "clone": false, 00:12:35.972 "esnap_clone": false 00:12:35.972 } 00:12:35.972 } 00:12:35.972 } 00:12:35.972 ] 00:12:35.972 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:35.972 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:35.972 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:35.972 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:35.972 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:35.972 14:31:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:36.231 14:31:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:36.231 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c8e62feb-5d5c-43ab-a924-4896124f134e 00:12:36.490 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba561269-8700-4a01-8747-c72040e5a2e5 00:12:36.749 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:36.749 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:37.007 00:12:37.007 real 0m15.606s 00:12:37.007 user 0m15.201s 00:12:37.007 sys 0m1.421s 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:37.007 ************************************ 00:12:37.007 END TEST lvs_grow_clean 00:12:37.007 ************************************ 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:37.007 ************************************ 
00:12:37.007 START TEST lvs_grow_dirty 00:12:37.007 ************************************ 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:37.007 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:37.008 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:37.008 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:37.008 14:31:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:37.266 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:37.266 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:37.266 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:37.266 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:37.266 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:37.524 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:37.524 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:37.524 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf lvol 150 00:12:37.782 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d4db315-82f3-46b5-8584-0052e8291c2c 00:12:37.782 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:37.782 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:38.041 [2024-11-20 14:31:49.750853] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:12:38.041 [2024-11-20 14:31:49.750902] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:38.041 true 00:12:38.041 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:38.041 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:38.041 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:38.041 14:31:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:38.300 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d4db315-82f3-46b5-8584-0052e8291c2c 00:12:38.559 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:38.559 [2024-11-20 14:31:50.505102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1450196 00:12:38.818 14:31:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1450196 /var/tmp/bdevperf.sock 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1450196 ']' 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.818 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.818 [2024-11-20 14:31:50.757051] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:12:38.818 [2024-11-20 14:31:50.757100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1450196 ] 00:12:39.077 [2024-11-20 14:31:50.830031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.077 [2024-11-20 14:31:50.884576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.077 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.077 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:39.077 14:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:39.645 Nvme0n1 00:12:39.645 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:39.645 [ 00:12:39.645 { 00:12:39.645 "name": "Nvme0n1", 00:12:39.645 "aliases": [ 00:12:39.645 "1d4db315-82f3-46b5-8584-0052e8291c2c" 00:12:39.645 ], 00:12:39.645 "product_name": "NVMe disk", 00:12:39.645 "block_size": 4096, 00:12:39.645 "num_blocks": 38912, 00:12:39.645 "uuid": "1d4db315-82f3-46b5-8584-0052e8291c2c", 00:12:39.645 "numa_id": 1, 00:12:39.645 "assigned_rate_limits": { 00:12:39.645 "rw_ios_per_sec": 0, 00:12:39.645 "rw_mbytes_per_sec": 0, 00:12:39.645 "r_mbytes_per_sec": 0, 00:12:39.645 "w_mbytes_per_sec": 0 00:12:39.645 }, 00:12:39.645 "claimed": false, 00:12:39.645 "zoned": false, 00:12:39.645 "supported_io_types": { 00:12:39.645 "read": true, 
00:12:39.645 "write": true, 00:12:39.645 "unmap": true, 00:12:39.645 "flush": true, 00:12:39.645 "reset": true, 00:12:39.645 "nvme_admin": true, 00:12:39.646 "nvme_io": true, 00:12:39.646 "nvme_io_md": false, 00:12:39.646 "write_zeroes": true, 00:12:39.646 "zcopy": false, 00:12:39.646 "get_zone_info": false, 00:12:39.646 "zone_management": false, 00:12:39.646 "zone_append": false, 00:12:39.646 "compare": true, 00:12:39.646 "compare_and_write": true, 00:12:39.646 "abort": true, 00:12:39.646 "seek_hole": false, 00:12:39.646 "seek_data": false, 00:12:39.646 "copy": true, 00:12:39.646 "nvme_iov_md": false 00:12:39.646 }, 00:12:39.646 "memory_domains": [ 00:12:39.646 { 00:12:39.646 "dma_device_id": "system", 00:12:39.646 "dma_device_type": 1 00:12:39.646 } 00:12:39.646 ], 00:12:39.646 "driver_specific": { 00:12:39.646 "nvme": [ 00:12:39.646 { 00:12:39.646 "trid": { 00:12:39.646 "trtype": "TCP", 00:12:39.646 "adrfam": "IPv4", 00:12:39.646 "traddr": "10.0.0.2", 00:12:39.646 "trsvcid": "4420", 00:12:39.646 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:39.646 }, 00:12:39.646 "ctrlr_data": { 00:12:39.646 "cntlid": 1, 00:12:39.646 "vendor_id": "0x8086", 00:12:39.646 "model_number": "SPDK bdev Controller", 00:12:39.646 "serial_number": "SPDK0", 00:12:39.646 "firmware_revision": "25.01", 00:12:39.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:39.646 "oacs": { 00:12:39.646 "security": 0, 00:12:39.646 "format": 0, 00:12:39.646 "firmware": 0, 00:12:39.646 "ns_manage": 0 00:12:39.646 }, 00:12:39.646 "multi_ctrlr": true, 00:12:39.646 "ana_reporting": false 00:12:39.646 }, 00:12:39.646 "vs": { 00:12:39.646 "nvme_version": "1.3" 00:12:39.646 }, 00:12:39.646 "ns_data": { 00:12:39.646 "id": 1, 00:12:39.646 "can_share": true 00:12:39.646 } 00:12:39.646 } 00:12:39.646 ], 00:12:39.646 "mp_policy": "active_passive" 00:12:39.646 } 00:12:39.646 } 00:12:39.646 ] 00:12:39.646 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1450416 00:12:39.646 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:39.646 14:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:39.904 Running I/O for 10 seconds... 00:12:40.842 Latency(us) 00:12:40.842 [2024-11-20T13:31:52.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.842 Nvme0n1 : 1.00 22357.00 87.33 0.00 0.00 0.00 0.00 0.00 00:12:40.842 [2024-11-20T13:31:52.800Z] =================================================================================================================== 00:12:40.842 [2024-11-20T13:31:52.800Z] Total : 22357.00 87.33 0.00 0.00 0.00 0.00 0.00 00:12:40.842 00:12:41.825 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:41.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.825 Nvme0n1 : 2.00 22554.50 88.10 0.00 0.00 0.00 0.00 0.00 00:12:41.825 [2024-11-20T13:31:53.784Z] =================================================================================================================== 00:12:41.826 [2024-11-20T13:31:53.784Z] Total : 22554.50 88.10 0.00 0.00 0.00 0.00 0.00 00:12:41.826 00:12:41.826 true 00:12:42.083 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:42.083 14:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:42.083 14:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:42.083 14:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:42.083 14:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1450416 00:12:43.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.016 Nvme0n1 : 3.00 22614.00 88.34 0.00 0.00 0.00 0.00 0.00 00:12:43.016 [2024-11-20T13:31:54.974Z] =================================================================================================================== 00:12:43.016 [2024-11-20T13:31:54.974Z] Total : 22614.00 88.34 0.00 0.00 0.00 0.00 0.00 00:12:43.016 00:12:43.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.952 Nvme0n1 : 4.00 22657.00 88.50 0.00 0.00 0.00 0.00 0.00 00:12:43.952 [2024-11-20T13:31:55.910Z] =================================================================================================================== 00:12:43.952 [2024-11-20T13:31:55.910Z] Total : 22657.00 88.50 0.00 0.00 0.00 0.00 0.00 00:12:43.952 00:12:44.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.971 Nvme0n1 : 5.00 22692.00 88.64 0.00 0.00 0.00 0.00 0.00 00:12:44.971 [2024-11-20T13:31:56.929Z] =================================================================================================================== 00:12:44.971 [2024-11-20T13:31:56.929Z] Total : 22692.00 88.64 0.00 0.00 0.00 0.00 0.00 00:12:44.971 00:12:45.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.976 Nvme0n1 : 6.00 22699.83 88.67 0.00 0.00 0.00 0.00 0.00 00:12:45.976 [2024-11-20T13:31:57.934Z] =================================================================================================================== 00:12:45.976 
[2024-11-20T13:31:57.934Z] Total : 22699.83 88.67 0.00 0.00 0.00 0.00 0.00 00:12:45.976 00:12:46.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.912 Nvme0n1 : 7.00 22723.14 88.76 0.00 0.00 0.00 0.00 0.00 00:12:46.912 [2024-11-20T13:31:58.870Z] =================================================================================================================== 00:12:46.912 [2024-11-20T13:31:58.870Z] Total : 22723.14 88.76 0.00 0.00 0.00 0.00 0.00 00:12:46.912 00:12:47.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.859 Nvme0n1 : 8.00 22748.00 88.86 0.00 0.00 0.00 0.00 0.00 00:12:47.859 [2024-11-20T13:31:59.817Z] =================================================================================================================== 00:12:47.859 [2024-11-20T13:31:59.817Z] Total : 22748.00 88.86 0.00 0.00 0.00 0.00 0.00 00:12:47.859 00:12:48.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.794 Nvme0n1 : 9.00 22746.56 88.85 0.00 0.00 0.00 0.00 0.00 00:12:48.794 [2024-11-20T13:32:00.752Z] =================================================================================================================== 00:12:48.794 [2024-11-20T13:32:00.752Z] Total : 22746.56 88.85 0.00 0.00 0.00 0.00 0.00 00:12:48.794 00:12:49.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.728 Nvme0n1 : 10.00 22758.90 88.90 0.00 0.00 0.00 0.00 0.00 00:12:49.728 [2024-11-20T13:32:01.686Z] =================================================================================================================== 00:12:49.728 [2024-11-20T13:32:01.686Z] Total : 22758.90 88.90 0.00 0.00 0.00 0.00 0.00 00:12:49.728 00:12:49.987 00:12:49.987 Latency(us) 00:12:49.987 [2024-11-20T13:32:01.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:49.987 Nvme0n1 : 10.00 22764.29 88.92 0.00 0.00 5619.87 3305.29 12651.30 00:12:49.987 [2024-11-20T13:32:01.945Z] =================================================================================================================== 00:12:49.987 [2024-11-20T13:32:01.945Z] Total : 22764.29 88.92 0.00 0.00 5619.87 3305.29 12651.30 00:12:49.987 { 00:12:49.987 "results": [ 00:12:49.987 { 00:12:49.987 "job": "Nvme0n1", 00:12:49.987 "core_mask": "0x2", 00:12:49.987 "workload": "randwrite", 00:12:49.987 "status": "finished", 00:12:49.987 "queue_depth": 128, 00:12:49.987 "io_size": 4096, 00:12:49.987 "runtime": 10.003257, 00:12:49.987 "iops": 22764.28567215658, 00:12:49.987 "mibps": 88.92299090686164, 00:12:49.987 "io_failed": 0, 00:12:49.987 "io_timeout": 0, 00:12:49.987 "avg_latency_us": 5619.873781915807, 00:12:49.987 "min_latency_us": 3305.2939130434784, 00:12:49.987 "max_latency_us": 12651.297391304348 00:12:49.987 } 00:12:49.987 ], 00:12:49.987 "core_count": 1 00:12:49.987 } 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1450196 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1450196 ']' 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1450196 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1450196 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:49.987 14:32:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1450196' 00:12:49.987 killing process with pid 1450196 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1450196 00:12:49.987 Received shutdown signal, test time was about 10.000000 seconds 00:12:49.987 00:12:49.987 Latency(us) 00:12:49.987 [2024-11-20T13:32:01.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.987 [2024-11-20T13:32:01.945Z] =================================================================================================================== 00:12:49.987 [2024-11-20T13:32:01.945Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1450196 00:12:49.987 14:32:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.246 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:50.504 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:50.504 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:50.762 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:12:50.762 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:50.762 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1447084 00:12:50.762 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1447084 00:12:50.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1447084 Killed "${NVMF_APP[@]}" "$@" 00:12:50.762 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1452288 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1452288 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1452288 ']' 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.763 14:32:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.763 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:50.763 [2024-11-20 14:32:02.655809] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:12:50.763 [2024-11-20 14:32:02.655858] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.021 [2024-11-20 14:32:02.737372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.021 [2024-11-20 14:32:02.778402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.022 [2024-11-20 14:32:02.778438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.022 [2024-11-20 14:32:02.778445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.022 [2024-11-20 14:32:02.778451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.022 [2024-11-20 14:32:02.778456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:51.022 [2024-11-20 14:32:02.779050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.022 14:32:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:51.280 [2024-11-20 14:32:03.086332] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:51.281 [2024-11-20 14:32:03.086434] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:51.281 [2024-11-20 14:32:03.086460] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1d4db315-82f3-46b5-8584-0052e8291c2c 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1d4db315-82f3-46b5-8584-0052e8291c2c 
00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.281 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:51.540 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d4db315-82f3-46b5-8584-0052e8291c2c -t 2000 00:12:51.540 [ 00:12:51.540 { 00:12:51.540 "name": "1d4db315-82f3-46b5-8584-0052e8291c2c", 00:12:51.540 "aliases": [ 00:12:51.540 "lvs/lvol" 00:12:51.540 ], 00:12:51.540 "product_name": "Logical Volume", 00:12:51.540 "block_size": 4096, 00:12:51.540 "num_blocks": 38912, 00:12:51.540 "uuid": "1d4db315-82f3-46b5-8584-0052e8291c2c", 00:12:51.540 "assigned_rate_limits": { 00:12:51.540 "rw_ios_per_sec": 0, 00:12:51.540 "rw_mbytes_per_sec": 0, 00:12:51.540 "r_mbytes_per_sec": 0, 00:12:51.540 "w_mbytes_per_sec": 0 00:12:51.540 }, 00:12:51.540 "claimed": false, 00:12:51.540 "zoned": false, 00:12:51.540 "supported_io_types": { 00:12:51.540 "read": true, 00:12:51.540 "write": true, 00:12:51.540 "unmap": true, 00:12:51.540 "flush": false, 00:12:51.540 "reset": true, 00:12:51.540 "nvme_admin": false, 00:12:51.540 "nvme_io": false, 00:12:51.540 "nvme_io_md": false, 00:12:51.540 "write_zeroes": true, 00:12:51.540 "zcopy": false, 00:12:51.540 "get_zone_info": false, 00:12:51.540 "zone_management": false, 00:12:51.540 "zone_append": 
false, 00:12:51.540 "compare": false, 00:12:51.540 "compare_and_write": false, 00:12:51.540 "abort": false, 00:12:51.540 "seek_hole": true, 00:12:51.540 "seek_data": true, 00:12:51.540 "copy": false, 00:12:51.540 "nvme_iov_md": false 00:12:51.540 }, 00:12:51.540 "driver_specific": { 00:12:51.540 "lvol": { 00:12:51.540 "lvol_store_uuid": "bc07dfe2-50a2-4eb9-883e-14f715e1bbaf", 00:12:51.540 "base_bdev": "aio_bdev", 00:12:51.540 "thin_provision": false, 00:12:51.540 "num_allocated_clusters": 38, 00:12:51.540 "snapshot": false, 00:12:51.540 "clone": false, 00:12:51.540 "esnap_clone": false 00:12:51.540 } 00:12:51.540 } 00:12:51.540 } 00:12:51.540 ] 00:12:51.540 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:51.540 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:51.540 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:51.799 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:51.799 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:51.799 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:52.058 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:52.058 14:32:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:12:52.317 [2024-11-20 14:32:04.071251] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.317 14:32:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:52.317 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:52.576 request: 00:12:52.576 { 00:12:52.576 "uuid": "bc07dfe2-50a2-4eb9-883e-14f715e1bbaf", 00:12:52.576 "method": "bdev_lvol_get_lvstores", 00:12:52.576 "req_id": 1 00:12:52.576 } 00:12:52.576 Got JSON-RPC error response 00:12:52.576 response: 00:12:52.576 { 00:12:52.576 "code": -19, 00:12:52.576 "message": "No such device" 00:12:52.576 } 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:52.576 aio_bdev 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d4db315-82f3-46b5-8584-0052e8291c2c 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1d4db315-82f3-46b5-8584-0052e8291c2c 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.576 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:52.835 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d4db315-82f3-46b5-8584-0052e8291c2c -t 2000 00:12:53.094 [ 00:12:53.094 { 00:12:53.094 "name": "1d4db315-82f3-46b5-8584-0052e8291c2c", 00:12:53.094 "aliases": [ 00:12:53.094 "lvs/lvol" 00:12:53.094 ], 00:12:53.094 "product_name": "Logical Volume", 00:12:53.094 "block_size": 4096, 00:12:53.094 "num_blocks": 38912, 00:12:53.094 "uuid": "1d4db315-82f3-46b5-8584-0052e8291c2c", 00:12:53.094 "assigned_rate_limits": { 00:12:53.094 "rw_ios_per_sec": 0, 00:12:53.094 "rw_mbytes_per_sec": 0, 00:12:53.094 "r_mbytes_per_sec": 0, 00:12:53.094 "w_mbytes_per_sec": 0 00:12:53.094 }, 00:12:53.094 "claimed": false, 00:12:53.094 "zoned": false, 00:12:53.094 "supported_io_types": { 00:12:53.094 "read": true, 00:12:53.094 "write": true, 00:12:53.094 "unmap": true, 00:12:53.094 "flush": false, 00:12:53.094 "reset": true, 00:12:53.094 "nvme_admin": false, 00:12:53.094 "nvme_io": false, 00:12:53.094 "nvme_io_md": false, 00:12:53.094 "write_zeroes": true, 00:12:53.094 "zcopy": false, 00:12:53.094 "get_zone_info": false, 00:12:53.094 "zone_management": false, 00:12:53.094 "zone_append": false, 00:12:53.094 "compare": false, 00:12:53.094 "compare_and_write": false, 
00:12:53.094 "abort": false, 00:12:53.094 "seek_hole": true, 00:12:53.094 "seek_data": true, 00:12:53.094 "copy": false, 00:12:53.094 "nvme_iov_md": false 00:12:53.094 }, 00:12:53.094 "driver_specific": { 00:12:53.094 "lvol": { 00:12:53.094 "lvol_store_uuid": "bc07dfe2-50a2-4eb9-883e-14f715e1bbaf", 00:12:53.094 "base_bdev": "aio_bdev", 00:12:53.094 "thin_provision": false, 00:12:53.094 "num_allocated_clusters": 38, 00:12:53.094 "snapshot": false, 00:12:53.094 "clone": false, 00:12:53.094 "esnap_clone": false 00:12:53.094 } 00:12:53.094 } 00:12:53.094 } 00:12:53.094 ] 00:12:53.094 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:53.094 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:53.094 14:32:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:53.094 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:53.094 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:53.094 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:53.352 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:53.352 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d4db315-82f3-46b5-8584-0052e8291c2c 00:12:53.611 14:32:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc07dfe2-50a2-4eb9-883e-14f715e1bbaf 00:12:53.869 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:53.869 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:54.128 00:12:54.128 real 0m17.059s 00:12:54.128 user 0m43.838s 00:12:54.128 sys 0m4.000s 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:54.128 ************************************ 00:12:54.128 END TEST lvs_grow_dirty 00:12:54.128 ************************************ 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:54.128 nvmf_trace.0 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.128 rmmod nvme_tcp 00:12:54.128 rmmod nvme_fabrics 00:12:54.128 rmmod nvme_keyring 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:54.128 14:32:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1452288 ']' 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1452288 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1452288 ']' 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1452288 
00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1452288 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1452288' 00:12:54.128 killing process with pid 1452288 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1452288 00:12:54.128 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1452288 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.388 14:32:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.355 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:56.355 00:12:56.355 real 0m41.957s 00:12:56.355 user 1m4.741s 00:12:56.355 sys 0m10.357s 00:12:56.355 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.355 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:56.355 ************************************ 00:12:56.355 END TEST nvmf_lvs_grow 00:12:56.355 ************************************ 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:56.615 ************************************ 00:12:56.615 START TEST nvmf_bdev_io_wait 00:12:56.615 ************************************ 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:56.615 * Looking for test storage... 
00:12:56.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:56.615 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.615 --rc genhtml_branch_coverage=1 00:12:56.615 --rc genhtml_function_coverage=1 00:12:56.615 --rc genhtml_legend=1 00:12:56.615 --rc geninfo_all_blocks=1 00:12:56.615 --rc geninfo_unexecuted_blocks=1 00:12:56.615 00:12:56.615 ' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.615 --rc genhtml_branch_coverage=1 00:12:56.615 --rc genhtml_function_coverage=1 00:12:56.615 --rc genhtml_legend=1 00:12:56.615 --rc geninfo_all_blocks=1 00:12:56.615 --rc geninfo_unexecuted_blocks=1 00:12:56.615 00:12:56.615 ' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.615 --rc genhtml_branch_coverage=1 00:12:56.615 --rc genhtml_function_coverage=1 00:12:56.615 --rc genhtml_legend=1 00:12:56.615 --rc geninfo_all_blocks=1 00:12:56.615 --rc geninfo_unexecuted_blocks=1 00:12:56.615 00:12:56.615 ' 00:12:56.615 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:56.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.615 --rc genhtml_branch_coverage=1 00:12:56.616 --rc genhtml_function_coverage=1 00:12:56.616 --rc genhtml_legend=1 00:12:56.616 --rc geninfo_all_blocks=1 00:12:56.616 --rc geninfo_unexecuted_blocks=1 00:12:56.616 00:12:56.616 ' 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.616 14:32:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.616 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:56.876 14:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.446 14:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:03.446 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:03.446 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.446 14:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:03.446 Found net devices under 0000:86:00.0: cvl_0_0 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.446 
14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:03.446 Found net devices under 0000:86:00.1: cvl_0_1 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.446 14:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.446 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:13:03.447 00:13:03.447 --- 10.0.0.2 ping statistics --- 00:13:03.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.447 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:13:03.447 00:13:03.447 --- 10.0.0.1 ping statistics --- 00:13:03.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.447 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1456352 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1456352 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1456352 ']' 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 [2024-11-20 14:32:14.616061] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:03.447 [2024-11-20 14:32:14.616110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.447 [2024-11-20 14:32:14.693090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.447 [2024-11-20 14:32:14.735733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.447 [2024-11-20 14:32:14.735770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:03.447 [2024-11-20 14:32:14.735778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.447 [2024-11-20 14:32:14.735786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.447 [2024-11-20 14:32:14.735809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.447 [2024-11-20 14:32:14.737271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.447 [2024-11-20 14:32:14.737382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.447 [2024-11-20 14:32:14.737466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.447 [2024-11-20 14:32:14.737467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 14:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 [2024-11-20 14:32:14.878227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 Malloc0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 
14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 [2024-11-20 14:32:14.925814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1456391 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1456393 
00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:03.447 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:03.447 { 00:13:03.447 "params": { 00:13:03.447 "name": "Nvme$subsystem", 00:13:03.447 "trtype": "$TEST_TRANSPORT", 00:13:03.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.447 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "$NVMF_PORT", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.448 "hdgst": ${hdgst:-false}, 00:13:03.448 "ddgst": ${ddgst:-false} 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 } 00:13:03.448 EOF 00:13:03.448 )") 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1456395 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:03.448 { 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme$subsystem", 00:13:03.448 "trtype": "$TEST_TRANSPORT", 00:13:03.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "$NVMF_PORT", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.448 "hdgst": ${hdgst:-false}, 00:13:03.448 "ddgst": ${ddgst:-false} 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 } 00:13:03.448 EOF 00:13:03.448 )") 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1456398 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:03.448 { 00:13:03.448 "params": { 
00:13:03.448 "name": "Nvme$subsystem", 00:13:03.448 "trtype": "$TEST_TRANSPORT", 00:13:03.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "$NVMF_PORT", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.448 "hdgst": ${hdgst:-false}, 00:13:03.448 "ddgst": ${ddgst:-false} 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 } 00:13:03.448 EOF 00:13:03.448 )") 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:03.448 { 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme$subsystem", 00:13:03.448 "trtype": "$TEST_TRANSPORT", 00:13:03.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "$NVMF_PORT", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.448 "hdgst": ${hdgst:-false}, 00:13:03.448 "ddgst": ${ddgst:-false} 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 } 00:13:03.448 EOF 00:13:03.448 )") 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1456391 00:13:03.448 14:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme1", 00:13:03.448 "trtype": "tcp", 00:13:03.448 "traddr": "10.0.0.2", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "4420", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.448 "hdgst": false, 00:13:03.448 "ddgst": false 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 }' 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme1", 00:13:03.448 "trtype": "tcp", 00:13:03.448 "traddr": "10.0.0.2", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "4420", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.448 "hdgst": false, 00:13:03.448 "ddgst": false 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 }' 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme1", 00:13:03.448 "trtype": "tcp", 00:13:03.448 "traddr": "10.0.0.2", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "4420", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.448 "hdgst": false, 00:13:03.448 "ddgst": false 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 }' 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:03.448 14:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme1", 00:13:03.448 "trtype": "tcp", 00:13:03.448 "traddr": "10.0.0.2", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "4420", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.448 "hdgst": false, 00:13:03.448 "ddgst": false 00:13:03.448 }, 00:13:03.448 "method": "bdev_nvme_attach_controller" 00:13:03.448 }' 00:13:03.448 [2024-11-20 14:32:14.977563] Starting SPDK v25.01-pre git sha1 
d2ebd983e / DPDK 24.03.0 initialization... 00:13:03.448 [2024-11-20 14:32:14.977610] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:03.448 [2024-11-20 14:32:14.977661] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:03.448 [2024-11-20 14:32:14.977702] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:03.448 [2024-11-20 14:32:14.981042] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:03.448 [2024-11-20 14:32:14.981042] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:03.448 [2024-11-20 14:32:14.981090] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:03.448 [2024-11-20 14:32:14.981093] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:03.448 [2024-11-20 14:32:15.175384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.448 [2024-11-20 14:32:15.218504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:03.448 [2024-11-20 14:32:15.268721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.448 [2024-11-20 14:32:15.311836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:03.448 [2024-11-20
14:32:15.369509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.707 [2024-11-20 14:32:15.413478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.707 [2024-11-20 14:32:15.426921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:03.707 [2024-11-20 14:32:15.456472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:03.707 Running I/O for 1 seconds... 00:13:03.707 Running I/O for 1 seconds... 00:13:03.966 Running I/O for 1 seconds... 00:13:03.966 Running I/O for 1 seconds... 00:13:04.901 12085.00 IOPS, 47.21 MiB/s 00:13:04.901 Latency(us) 00:13:04.901 [2024-11-20T13:32:16.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.901 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:04.901 Nvme1n1 : 1.01 12140.79 47.42 0.00 0.00 10507.09 5442.34 15386.71 00:13:04.901 [2024-11-20T13:32:16.859Z] =================================================================================================================== 00:13:04.901 [2024-11-20T13:32:16.859Z] Total : 12140.79 47.42 0.00 0.00 10507.09 5442.34 15386.71 00:13:04.901 10903.00 IOPS, 42.59 MiB/s 00:13:04.901 Latency(us) 00:13:04.901 [2024-11-20T13:32:16.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.901 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:04.901 Nvme1n1 : 1.01 10973.93 42.87 0.00 0.00 11628.06 4359.57 22795.13 00:13:04.901 [2024-11-20T13:32:16.859Z] =================================================================================================================== 00:13:04.901 [2024-11-20T13:32:16.859Z] Total : 10973.93 42.87 0.00 0.00 11628.06 4359.57 22795.13 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1456393 00:13:04.901 10228.00 IOPS, 39.95 MiB/s 00:13:04.901 Latency(us) 00:13:04.901 [2024-11-20T13:32:16.859Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.901 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:04.901 Nvme1n1 : 1.01 10300.81 40.24 0.00 0.00 12387.27 4559.03 21655.37 00:13:04.901 [2024-11-20T13:32:16.859Z] =================================================================================================================== 00:13:04.901 [2024-11-20T13:32:16.859Z] Total : 10300.81 40.24 0.00 0.00 12387.27 4559.03 21655.37 00:13:04.901 236128.00 IOPS, 922.38 MiB/s 00:13:04.901 Latency(us) 00:13:04.901 [2024-11-20T13:32:16.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.901 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:04.901 Nvme1n1 : 1.00 235761.14 920.94 0.00 0.00 539.93 229.73 1552.92 00:13:04.901 [2024-11-20T13:32:16.859Z] =================================================================================================================== 00:13:04.901 [2024-11-20T13:32:16.859Z] Total : 235761.14 920.94 0.00 0.00 539.93 229.73 1552.92 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1456395 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1456398 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.901 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.901 rmmod nvme_tcp 00:13:05.160 rmmod nvme_fabrics 00:13:05.160 rmmod nvme_keyring 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1456352 ']' 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1456352 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1456352 ']' 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1456352 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1456352 00:13:05.160 14:32:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1456352' 00:13:05.160 killing process with pid 1456352 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1456352 00:13:05.160 14:32:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1456352 00:13:05.160 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.160 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.160 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.160 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.420 14:32:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.420 14:32:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.343 00:13:07.343 real 0m10.829s 00:13:07.343 user 0m16.261s 00:13:07.343 sys 0m6.274s 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:07.343 ************************************ 00:13:07.343 END TEST nvmf_bdev_io_wait 00:13:07.343 ************************************ 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:07.343 ************************************ 00:13:07.343 START TEST nvmf_queue_depth 00:13:07.343 ************************************ 00:13:07.343 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:07.602 * Looking for test storage... 
00:13:07.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:07.602 
14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.602 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:07.602 --rc genhtml_branch_coverage=1 00:13:07.602 --rc genhtml_function_coverage=1 00:13:07.602 --rc genhtml_legend=1 00:13:07.602 --rc geninfo_all_blocks=1 00:13:07.602 --rc geninfo_unexecuted_blocks=1 00:13:07.602 00:13:07.602 ' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.602 --rc genhtml_branch_coverage=1 00:13:07.602 --rc genhtml_function_coverage=1 00:13:07.602 --rc genhtml_legend=1 00:13:07.602 --rc geninfo_all_blocks=1 00:13:07.602 --rc geninfo_unexecuted_blocks=1 00:13:07.602 00:13:07.602 ' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.602 --rc genhtml_branch_coverage=1 00:13:07.602 --rc genhtml_function_coverage=1 00:13:07.602 --rc genhtml_legend=1 00:13:07.602 --rc geninfo_all_blocks=1 00:13:07.602 --rc geninfo_unexecuted_blocks=1 00:13:07.602 00:13:07.602 ' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.602 --rc genhtml_branch_coverage=1 00:13:07.602 --rc genhtml_function_coverage=1 00:13:07.602 --rc genhtml_legend=1 00:13:07.602 --rc geninfo_all_blocks=1 00:13:07.602 --rc geninfo_unexecuted_blocks=1 00:13:07.602 00:13:07.602 ' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.602 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.603 14:32:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.603 14:32:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.603 14:32:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.603 14:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.176 14:32:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:14.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:14.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:14.176 Found net devices under 0000:86:00.0: cvl_0_0 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.176 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:14.177 Found net devices under 0000:86:00.1: cvl_0_1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.177 
14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:14.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:13:14.177 00:13:14.177 --- 10.0.0.2 ping statistics --- 00:13:14.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.177 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:14.177 00:13:14.177 --- 10.0.0.1 ping statistics --- 00:13:14.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.177 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1460382 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1460382 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1460382 ']' 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.177 [2024-11-20 14:32:25.542486] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:14.177 [2024-11-20 14:32:25.542542] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.177 [2024-11-20 14:32:25.625146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.177 [2024-11-20 14:32:25.665157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.177 [2024-11-20 14:32:25.665204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.177 [2024-11-20 14:32:25.665211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.177 [2024-11-20 14:32:25.665217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.177 [2024-11-20 14:32:25.665222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.177 [2024-11-20 14:32:25.665751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.177 [2024-11-20 14:32:25.809987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.177 Malloc0 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.177 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 [2024-11-20 14:32:25.860268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.178 14:32:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1460410 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1460410 /var/tmp/bdevperf.sock 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1460410 ']' 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:14.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.178 14:32:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 [2024-11-20 14:32:25.911702] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:13:14.178 [2024-11-20 14:32:25.911745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460410 ] 00:13:14.178 [2024-11-20 14:32:25.987819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.178 [2024-11-20 14:32:26.030414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.178 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.178 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:14.178 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:14.178 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.178 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:14.435 NVMe0n1 00:13:14.435 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.435 14:32:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:14.693 Running I/O for 10 seconds... 
00:13:16.558 11409.00 IOPS, 44.57 MiB/s [2024-11-20T13:32:29.888Z] 11776.50 IOPS, 46.00 MiB/s [2024-11-20T13:32:30.822Z] 11941.00 IOPS, 46.64 MiB/s [2024-11-20T13:32:31.754Z] 11883.50 IOPS, 46.42 MiB/s [2024-11-20T13:32:32.689Z] 11904.60 IOPS, 46.50 MiB/s [2024-11-20T13:32:33.621Z] 11946.17 IOPS, 46.66 MiB/s [2024-11-20T13:32:34.553Z] 11993.43 IOPS, 46.85 MiB/s [2024-11-20T13:32:35.487Z] 12025.38 IOPS, 46.97 MiB/s [2024-11-20T13:32:36.860Z] 12048.22 IOPS, 47.06 MiB/s [2024-11-20T13:32:36.860Z] 12060.90 IOPS, 47.11 MiB/s 00:13:24.902 Latency(us) 00:13:24.902 [2024-11-20T13:32:36.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.902 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:24.902 Verification LBA range: start 0x0 length 0x4000 00:13:24.902 NVMe0n1 : 10.07 12075.23 47.17 0.00 0.00 84525.25 19717.79 55620.12 00:13:24.902 [2024-11-20T13:32:36.860Z] =================================================================================================================== 00:13:24.902 [2024-11-20T13:32:36.860Z] Total : 12075.23 47.17 0.00 0.00 84525.25 19717.79 55620.12 00:13:24.902 { 00:13:24.902 "results": [ 00:13:24.902 { 00:13:24.902 "job": "NVMe0n1", 00:13:24.902 "core_mask": "0x1", 00:13:24.902 "workload": "verify", 00:13:24.902 "status": "finished", 00:13:24.902 "verify_range": { 00:13:24.902 "start": 0, 00:13:24.902 "length": 16384 00:13:24.902 }, 00:13:24.902 "queue_depth": 1024, 00:13:24.902 "io_size": 4096, 00:13:24.902 "runtime": 10.072935, 00:13:24.902 "iops": 12075.229314991113, 00:13:24.902 "mibps": 47.168864511684035, 00:13:24.902 "io_failed": 0, 00:13:24.902 "io_timeout": 0, 00:13:24.902 "avg_latency_us": 84525.25417926128, 00:13:24.902 "min_latency_us": 19717.787826086955, 00:13:24.902 "max_latency_us": 55620.11826086957 00:13:24.902 } 00:13:24.902 ], 00:13:24.902 "core_count": 1 00:13:24.902 } 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 1460410 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1460410 ']' 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1460410 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1460410 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1460410' 00:13:24.902 killing process with pid 1460410 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1460410 00:13:24.902 Received shutdown signal, test time was about 10.000000 seconds 00:13:24.902 00:13:24.902 Latency(us) 00:13:24.902 [2024-11-20T13:32:36.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.902 [2024-11-20T13:32:36.860Z] =================================================================================================================== 00:13:24.902 [2024-11-20T13:32:36.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1460410 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.902 rmmod nvme_tcp 00:13:24.902 rmmod nvme_fabrics 00:13:24.902 rmmod nvme_keyring 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1460382 ']' 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1460382 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1460382 ']' 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1460382 00:13:24.902 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1460382 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1460382' 00:13:25.162 killing process with pid 1460382 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1460382 00:13:25.162 14:32:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1460382 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.162 14:32:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.699 14:32:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.699 00:13:27.699 real 0m19.894s 00:13:27.699 user 0m23.363s 00:13:27.699 sys 0m6.074s 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:27.699 ************************************ 00:13:27.699 END TEST nvmf_queue_depth 00:13:27.699 ************************************ 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:27.699 ************************************ 00:13:27.699 START TEST nvmf_target_multipath 00:13:27.699 ************************************ 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:27.699 * Looking for test storage... 
00:13:27.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:27.699 14:32:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:27.699 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.700 --rc genhtml_branch_coverage=1 00:13:27.700 --rc genhtml_function_coverage=1 00:13:27.700 --rc genhtml_legend=1 00:13:27.700 --rc geninfo_all_blocks=1 00:13:27.700 --rc geninfo_unexecuted_blocks=1 00:13:27.700 00:13:27.700 ' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.700 --rc genhtml_branch_coverage=1 00:13:27.700 --rc genhtml_function_coverage=1 00:13:27.700 --rc genhtml_legend=1 00:13:27.700 --rc geninfo_all_blocks=1 00:13:27.700 --rc geninfo_unexecuted_blocks=1 00:13:27.700 00:13:27.700 ' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.700 --rc genhtml_branch_coverage=1 00:13:27.700 --rc genhtml_function_coverage=1 00:13:27.700 --rc genhtml_legend=1 00:13:27.700 --rc geninfo_all_blocks=1 00:13:27.700 --rc geninfo_unexecuted_blocks=1 00:13:27.700 00:13:27.700 ' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.700 --rc genhtml_branch_coverage=1 00:13:27.700 --rc genhtml_function_coverage=1 00:13:27.700 --rc genhtml_legend=1 00:13:27.700 --rc geninfo_all_blocks=1 00:13:27.700 --rc geninfo_unexecuted_blocks=1 00:13:27.700 00:13:27.700 ' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.700 14:32:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:34.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:34.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:34.271 Found net devices under 0000:86:00.0: cvl_0_0 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.271 14:32:45 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:34.271 Found net devices under 0000:86:00.1: cvl_0_1 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:34.271 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:34.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:34.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms
00:13:34.272 
00:13:34.272 --- 10.0.0.2 ping statistics ---
00:13:34.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:34.272 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:34.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:34.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms
00:13:34.272 
00:13:34.272 --- 10.0.0.1 ping statistics ---
00:13:34.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:34.272 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:13:34.272 only one NIC for nvmf test
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:34.272 rmmod nvme_tcp
00:13:34.272 rmmod nvme_fabrics
00:13:34.272 rmmod nvme_keyring
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:34.272 14:32:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:36.180 
00:13:36.180 real	0m8.424s
00:13:36.180 user	0m1.834s
00:13:36.180 sys	0m4.622s
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:13:36.180 ************************************
00:13:36.180 END TEST nvmf_target_multipath
00:13:36.180 ************************************
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:36.180 ************************************
00:13:36.180 START TEST nvmf_zcopy
00:13:36.180 ************************************
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:13:36.180 * Looking for test storage...
00:13:36.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.180 14:32:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.180 --rc genhtml_branch_coverage=1 00:13:36.180 --rc genhtml_function_coverage=1 00:13:36.180 --rc genhtml_legend=1 00:13:36.180 --rc geninfo_all_blocks=1 00:13:36.180 --rc geninfo_unexecuted_blocks=1 00:13:36.180 00:13:36.180 ' 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.180 --rc genhtml_branch_coverage=1 00:13:36.180 --rc genhtml_function_coverage=1 00:13:36.180 --rc genhtml_legend=1 00:13:36.180 --rc geninfo_all_blocks=1 00:13:36.180 --rc geninfo_unexecuted_blocks=1 00:13:36.180 00:13:36.180 ' 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.180 --rc genhtml_branch_coverage=1 00:13:36.180 --rc genhtml_function_coverage=1 00:13:36.180 --rc genhtml_legend=1 00:13:36.180 --rc geninfo_all_blocks=1 00:13:36.180 --rc geninfo_unexecuted_blocks=1 00:13:36.180 00:13:36.180 ' 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:36.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.180 --rc genhtml_branch_coverage=1 00:13:36.180 --rc 
genhtml_function_coverage=1 00:13:36.180 --rc genhtml_legend=1 00:13:36.180 --rc geninfo_all_blocks=1 00:13:36.180 --rc geninfo_unexecuted_blocks=1 00:13:36.180 00:13:36.180 ' 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.180 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.181 14:32:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.181 14:32:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.181 14:32:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.880 14:32:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.880 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.880 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.880 14:32:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.880 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.881 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.881 14:32:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:13:42.881 00:13:42.881 --- 10.0.0.2 ping statistics --- 00:13:42.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.881 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
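The `nvmf_tcp_init` steps traced above build a loopback test topology: the target port is moved into its own network namespace so initiator and target traffic crosses the real wire between the two E810 ports. A stand-alone sketch of that sequence is below (interface names, IPs, and the iptables rule are taken from this log; actually running it needs root and the two NICs, so the sketch only writes the script out and syntax-checks it):

```shell
# Sketch of the netns topology nvmf_tcp_init builds in this run. Device names
# cvl_0_0/cvl_0_1 and addresses come from the log above; we only generate the
# script and verify its syntax, since executing it requires root + hardware.
cat > /tmp/nvmf_tcp_topology.sh <<'EOF'
#!/usr/bin/env bash
set -e
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                 # target side gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                           # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
EOF
bash -n /tmp/nvmf_tcp_topology.sh && echo "topology script: syntax OK"
```

The two ping checks mirror the `common.sh@290`/`@291` verification in the log: both directions must answer before the target application is started inside the namespace.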
00:13:42.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:13:42.881 00:13:42.881 --- 10.0.0.1 ping statistics --- 00:13:42.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.881 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1469305 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1469305 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1469305 ']' 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.881 14:32:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 [2024-11-20 14:32:53.971093] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:42.881 [2024-11-20 14:32:53.971140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.881 [2024-11-20 14:32:54.049511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.881 [2024-11-20 14:32:54.091612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.881 [2024-11-20 14:32:54.091648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:42.881 [2024-11-20 14:32:54.091655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.881 [2024-11-20 14:32:54.091661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.881 [2024-11-20 14:32:54.091666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.881 [2024-11-20 14:32:54.092203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 [2024-11-20 14:32:54.241118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 [2024-11-20 14:32:54.261309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 malloc0 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.881 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:42.882 { 00:13:42.882 "params": { 00:13:42.882 "name": "Nvme$subsystem", 00:13:42.882 "trtype": "$TEST_TRANSPORT", 00:13:42.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.882 "adrfam": "ipv4", 00:13:42.882 "trsvcid": "$NVMF_PORT", 00:13:42.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.882 "hdgst": ${hdgst:-false}, 00:13:42.882 "ddgst": ${ddgst:-false} 00:13:42.882 }, 00:13:42.882 "method": "bdev_nvme_attach_controller" 00:13:42.882 } 00:13:42.882 EOF 00:13:42.882 )") 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
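The `gen_nvmf_target_json` heredoc above expands per-subsystem variables into the `bdev_nvme_attach_controller` config entry that bdevperf reads from `/dev/fd/62`. A stand-alone rendition with the values from this run is below; `python3 -m json.tool` stands in for the `jq .` pretty-print step and doubles as a validity check (the digest defaults `${hdgst:-false}`/`${ddgst:-false}` are pinned to `false`, as in this run):

```shell
# Stand-alone version of the config fragment gen_nvmf_target_json emits above,
# with this run's transport/address/port substituted in. Validates as JSON.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
cat <<EOF | python3 -m json.tool | tee /tmp/nvmf_attach.json
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
```

With `subsystem=1` this reproduces the `Nvme1` / `cnode1` / `host1` object printed by `common.sh@586` in the log.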
00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:42.882 14:32:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:42.882 "params": { 00:13:42.882 "name": "Nvme1", 00:13:42.882 "trtype": "tcp", 00:13:42.882 "traddr": "10.0.0.2", 00:13:42.882 "adrfam": "ipv4", 00:13:42.882 "trsvcid": "4420", 00:13:42.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.882 "hdgst": false, 00:13:42.882 "ddgst": false 00:13:42.882 }, 00:13:42.882 "method": "bdev_nvme_attach_controller" 00:13:42.882 }' 00:13:42.882 [2024-11-20 14:32:54.347662] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:13:42.882 [2024-11-20 14:32:54.347707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469414 ] 00:13:42.882 [2024-11-20 14:32:54.423029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.882 [2024-11-20 14:32:54.464294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.882 Running I/O for 10 seconds... 
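The target that bdevperf verifies here was provisioned by the `rpc_cmd` calls traced earlier (zcopy transport, subsystem `cnode1`, TCP listener, 32 MiB malloc bdev, namespace 1). The sequence can be sketched as plain `rpc.py` invocations; the `rpc=` path is an assumption (an SPDK checkout), and since no target is running here the sketch only records and syntax-checks the steps:

```shell
# RPC provisioning sequence from this run, as direct rpc.py calls. The rpc=
# path assumes an SPDK source checkout; only written out and syntax-checked.
cat > /tmp/nvmf_provision.sh <<'EOF'
#!/usr/bin/env bash
set -e
rpc=./scripts/rpc.py   # assumed location inside an SPDK checkout
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MiB bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
EOF
bash -n /tmp/nvmf_provision.sh && echo "provision script: syntax OK"
```

The repeated `Requested NSID 1 already in use` errors later in the log are the test deliberately re-issuing that last `nvmf_subsystem_add_ns` against an already-populated NSID while I/O runs.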
00:13:45.221 8430.00 IOPS, 65.86 MiB/s [2024-11-20T13:32:58.114Z] 8517.00 IOPS, 66.54 MiB/s [2024-11-20T13:32:59.050Z] 8529.33 IOPS, 66.64 MiB/s [2024-11-20T13:33:00.013Z] 8549.25 IOPS, 66.79 MiB/s [2024-11-20T13:33:00.947Z] 8562.40 IOPS, 66.89 MiB/s [2024-11-20T13:33:01.880Z] 8538.67 IOPS, 66.71 MiB/s [2024-11-20T13:33:02.815Z] 8541.86 IOPS, 66.73 MiB/s [2024-11-20T13:33:04.188Z] 8548.75 IOPS, 66.79 MiB/s [2024-11-20T13:33:05.123Z] 8555.56 IOPS, 66.84 MiB/s [2024-11-20T13:33:05.123Z] 8560.70 IOPS, 66.88 MiB/s 00:13:53.165 Latency(us) 00:13:53.165 [2024-11-20T13:33:05.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.165 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:53.165 Verification LBA range: start 0x0 length 0x1000 00:13:53.165 Nvme1n1 : 10.01 8564.67 66.91 0.00 0.00 14902.07 2108.55 23251.03 00:13:53.165 [2024-11-20T13:33:05.123Z] =================================================================================================================== 00:13:53.165 [2024-11-20T13:33:05.123Z] Total : 8564.67 66.91 0.00 0.00 14902.07 2108.55 23251.03 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1471299 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:53.165 14:33:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:53.165 { 00:13:53.165 "params": { 00:13:53.165 "name": "Nvme$subsystem", 00:13:53.165 "trtype": "$TEST_TRANSPORT", 00:13:53.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.165 "adrfam": "ipv4", 00:13:53.165 "trsvcid": "$NVMF_PORT", 00:13:53.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.165 "hdgst": ${hdgst:-false}, 00:13:53.165 "ddgst": ${ddgst:-false} 00:13:53.165 }, 00:13:53.165 "method": "bdev_nvme_attach_controller" 00:13:53.165 } 00:13:53.165 EOF 00:13:53.165 )") 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:53.165 [2024-11-20 14:33:04.993084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:04.993120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:53.165 14:33:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:53.165 "params": { 00:13:53.165 "name": "Nvme1", 00:13:53.165 "trtype": "tcp", 00:13:53.165 "traddr": "10.0.0.2", 00:13:53.165 "adrfam": "ipv4", 00:13:53.165 "trsvcid": "4420", 00:13:53.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.165 "hdgst": false, 00:13:53.165 "ddgst": false 00:13:53.165 }, 00:13:53.165 "method": "bdev_nvme_attach_controller" 00:13:53.165 }' 00:13:53.165 [2024-11-20 14:33:05.005075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.005088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.017110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.017124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.029136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.029148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.031770] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:13:53.165 [2024-11-20 14:33:05.031816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471299 ] 00:13:53.165 [2024-11-20 14:33:05.041166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.041176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.053195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.053205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.065229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.065240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.077263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.077278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.089297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.089308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.101330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.165 [2024-11-20 14:33:05.101340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.165 [2024-11-20 14:33:05.106813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.165 [2024-11-20 14:33:05.113361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:53.165 [2024-11-20 14:33:05.113372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.125419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.125446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.137428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.137441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.149436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.424 [2024-11-20 14:33:05.149457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.149467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.161502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.161516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.173528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.173551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.185561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.185578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.197591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.197606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.209622] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.209635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.221654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.221667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.233681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.233694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.245739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.245760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.257756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.257770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.269792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.269808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.281823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.281837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.293845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.293861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.305877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.305888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.317913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.317924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.329952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.329967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.341989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.342000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.354014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.354026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.366051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.366065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.424 [2024-11-20 14:33:05.378090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.424 [2024-11-20 14:33:05.378107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.390123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.390141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.402146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 
[2024-11-20 14:33:05.402157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.414179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.414191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.426261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.426281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 Running I/O for 5 seconds... 00:13:53.683 [2024-11-20 14:33:05.438285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.438297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.453400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.453426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.467608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.467629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.481896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.481920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.495896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.495916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.504944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 
14:33:05.504970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.683 [2024-11-20 14:33:05.519596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:53.683 [2024-11-20 14:33:05.519617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical error pairs repeat continuously from 14:33:05.533 through 14:33:07.637]
00:13:54.722 16493.00 IOPS, 128.85 MiB/s [2024-11-20T13:33:06.680Z]
00:13:55.502 16546.50 IOPS, 129.27 MiB/s [2024-11-20T13:33:07.460Z]
00:13:55.761 [2024-11-20 14:33:07.650887]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:55.761 [2024-11-20 14:33:07.650906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:55.761 [2024-11-20 14:33:07.664903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:55.761 [2024-11-20 14:33:07.664922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:55.761 [2024-11-20 14:33:07.678662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:55.761 [2024-11-20 14:33:07.678681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:55.761 [2024-11-20 14:33:07.692551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:55.761 [2024-11-20 14:33:07.692571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:55.761 [2024-11-20 14:33:07.706427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:55.761 [2024-11-20 14:33:07.706446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.720196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.720215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.734124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.734143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.747890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.747913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.761578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.761597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.775328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.775347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.788972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.788992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.803322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.803341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.814003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.814022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.823960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.823979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.838768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.838788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.849405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.849424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.863959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 
[2024-11-20 14:33:07.863977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.878087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.878106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.891698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.891717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.906011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.906031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.919971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.920006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.929032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.929051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.943451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.943470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.956446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.956464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.020 [2024-11-20 14:33:07.970960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.020 [2024-11-20 14:33:07.970979] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:07.985093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:07.985112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:07.996232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:07.996255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:08.010754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.010774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:08.024506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.024528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:08.038806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.038832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:08.049394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.049413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:08.059013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.059035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.279 [2024-11-20 14:33:08.068702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.068722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:56.279 [2024-11-20 14:33:08.083171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.279 [2024-11-20 14:33:08.083191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.097329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.097348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.108406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.108426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.123238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.123258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.134116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.134137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.148569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.148589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.162366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.162386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.176322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.176343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.190086] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.190106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.203507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.203527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.217784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.217806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.280 [2024-11-20 14:33:08.231858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.280 [2024-11-20 14:33:08.231877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.245998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.246022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.256417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.256437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.265993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.266013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.280353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.280373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.294337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.294357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.308130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.308150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.322269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.322289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.336730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.336753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.347589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.347608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.361563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.361582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.375292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.375312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.389015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.389035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.402910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 
[2024-11-20 14:33:08.402929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.417481] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.417500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.432889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.432909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 16601.00 IOPS, 129.70 MiB/s [2024-11-20T13:33:08.497Z] [2024-11-20 14:33:08.447272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.447296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.461159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.461180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.475941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.475976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.539 [2024-11-20 14:33:08.490609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.539 [2024-11-20 14:33:08.490627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.505048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.505068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.519227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 
[2024-11-20 14:33:08.519246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.530191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.530220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.539907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.539925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.554164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.554183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.567736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.567755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.582033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.582052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.593056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.593075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.607456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.607474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.621304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.621322] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.636130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.636149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.651297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.651317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.665530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.665548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.676548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.676566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.690969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.690988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.704542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.704561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.718444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.718463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:56.799 [2024-11-20 14:33:08.732506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.732525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:56.799 [2024-11-20 14:33:08.746559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:56.799 [2024-11-20 14:33:08.746578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.760784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.760804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.772310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.772329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.786788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.786807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.800585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.800604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.814502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.814523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.828922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.828942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.843704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.843723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.859773] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.859793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.874433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.874453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.889262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.889281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.903792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.903813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.917792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.917812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.932203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.932223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.943487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.943506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.958019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.958039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.971916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.971934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:08.986275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:08.986294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:09.000013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:09.000031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.058 [2024-11-20 14:33:09.013744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.058 [2024-11-20 14:33:09.013763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.028187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.028207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.042280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.042304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.056203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.056222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.069734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.069753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.083705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 
[2024-11-20 14:33:09.083724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.097524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.097542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.111532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.111551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.125477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.125495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.317 [2024-11-20 14:33:09.139130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.317 [2024-11-20 14:33:09.139149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.153354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.153373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.167304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.167323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.181236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.181256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.195868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.195887] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.211257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.211276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.225554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.225574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.239783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.239803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.250624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.250644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.318 [2024-11-20 14:33:09.265563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.318 [2024-11-20 14:33:09.265582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.577 [2024-11-20 14:33:09.281197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.577 [2024-11-20 14:33:09.281217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.577 [2024-11-20 14:33:09.295868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.577 [2024-11-20 14:33:09.295887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.577 [2024-11-20 14:33:09.306855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.577 [2024-11-20 14:33:09.306877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:57.577 [2024-11-20 14:33:09.321736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:57.577 [2024-11-20 14:33:09.321755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:57.577 [... the identical "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats for every subsequent attempt between 14:33:09.33 and 14:33:10.44; repeats elided ...] 16598.75 IOPS, 129.68 MiB/s [2024-11-20T13:33:09.536Z] [2024-11-20 14:33:10.445359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.617 [2024-11-20 14:33:10.445379]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.617 16581.60 IOPS, 129.54 MiB/s [2024-11-20T13:33:10.575Z] [2024-11-20 14:33:10.453714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.617 [2024-11-20 14:33:10.453732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:58.617 Latency(us)
00:13:58.617 [2024-11-20T13:33:10.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:58.617 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:58.617 Nvme1n1 : 5.01 16587.23 129.59 0.00 0.00 7709.65 3590.23 16070.57
00:13:58.617 [2024-11-20T13:33:10.575Z] ===================================================================================================================
00:13:58.617 [2024-11-20T13:33:10.575Z] Total : 16587.23 129.59 0.00 0.00 7709.65 3590.23 16070.57
00:13:58.617 [2024-11-20 14:33:10.465519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:58.617 [2024-11-20 14:33:10.465535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.617 [... the same error pair repeats at roughly 12 ms intervals from 14:33:10.477 through 14:33:10.597; repeats elided ...] [2024-11-20 14:33:10.609904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:13:58.877 [2024-11-20 14:33:10.609913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:58.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1471299) - No such process 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1471299 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.877 delay0 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.877 14:33:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:58.877 [2024-11-20 14:33:10.765082] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:06.994 [2024-11-20 14:33:17.425677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c070 is same with the state(6) to be set 00:14:06.994 [2024-11-20 14:33:17.425717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c070 is same with the state(6) to be set 00:14:06.994 Initializing NVMe Controllers 00:14:06.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:06.994 Initialization complete. Launching workers. 00:14:06.994 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5798 00:14:06.994 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6066, failed to submit 52 00:14:06.994 success 5916, unsuccessful 150, failed 0 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.994 14:33:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.994 rmmod nvme_tcp 00:14:06.994 rmmod nvme_fabrics 00:14:06.994 rmmod nvme_keyring 00:14:06.994 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1469305 ']' 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1469305 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1469305 ']' 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1469305 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1469305 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1469305' 00:14:06.995 killing process with pid 1469305 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1469305 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1469305 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.995 14:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:07.933 00:14:07.933 real 0m32.072s 00:14:07.933 user 0m43.048s 00:14:07.933 sys 0m11.352s 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:07.933 ************************************ 00:14:07.933 END TEST nvmf_zcopy 00:14:07.933 ************************************ 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 
00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:07.933 ************************************ 00:14:07.933 START TEST nvmf_nmic 00:14:07.933 ************************************ 00:14:07.933 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:08.193 * Looking for test storage... 00:14:08.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.193 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:08.193 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:14:08.193 14:33:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.193 --rc genhtml_branch_coverage=1 00:14:08.193 --rc genhtml_function_coverage=1 00:14:08.193 --rc genhtml_legend=1 00:14:08.193 --rc geninfo_all_blocks=1 00:14:08.193 --rc geninfo_unexecuted_blocks=1 00:14:08.193 00:14:08.193 ' 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.193 --rc genhtml_branch_coverage=1 00:14:08.193 --rc genhtml_function_coverage=1 00:14:08.193 --rc genhtml_legend=1 00:14:08.193 --rc geninfo_all_blocks=1 00:14:08.193 --rc geninfo_unexecuted_blocks=1 00:14:08.193 00:14:08.193 ' 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.193 --rc genhtml_branch_coverage=1 00:14:08.193 --rc genhtml_function_coverage=1 00:14:08.193 --rc genhtml_legend=1 00:14:08.193 --rc geninfo_all_blocks=1 00:14:08.193 --rc geninfo_unexecuted_blocks=1 00:14:08.193 00:14:08.193 ' 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:08.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.193 --rc genhtml_branch_coverage=1 00:14:08.193 --rc genhtml_function_coverage=1 00:14:08.193 --rc genhtml_legend=1 00:14:08.193 --rc geninfo_all_blocks=1 00:14:08.193 --rc geninfo_unexecuted_blocks=1 00:14:08.193 00:14:08.193 ' 00:14:08.193 14:33:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.193 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.194 14:33:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.194 14:33:20 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.194 14:33:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.767 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.768 14:33:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:14.768 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.768 14:33:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:14.768 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:14.768 Found net devices under 0000:86:00.0: cvl_0_0 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:14.768 Found net devices under 0000:86:00.1: cvl_0_1 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.768 14:33:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.768 14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.768 
14:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:14:14.768 00:14:14.768 --- 10.0.0.2 ping statistics --- 00:14:14.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.768 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:14.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:14:14.768 00:14:14.768 --- 10.0.0.1 ping statistics --- 00:14:14.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.768 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1477295 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1477295 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1477295 ']' 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.768 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.769 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.769 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.769 14:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:14.769 [2024-11-20 14:33:26.176454] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:14.769 [2024-11-20 14:33:26.176504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.769 [2024-11-20 14:33:26.258721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.769 [2024-11-20 14:33:26.302164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.769 [2024-11-20 14:33:26.302202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:14.769 [2024-11-20 14:33:26.302209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.769 [2024-11-20 14:33:26.302216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.769 [2024-11-20 14:33:26.302221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.769 [2024-11-20 14:33:26.303822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.769 [2024-11-20 14:33:26.303929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.769 [2024-11-20 14:33:26.303955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.769 [2024-11-20 14:33:26.303962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.336 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 [2024-11-20 14:33:27.061193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.337 
14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 Malloc0 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 [2024-11-20 14:33:27.120744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:15.337 test case1: single bdev can't be used in multiple subsystems 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 [2024-11-20 14:33:27.148661] bdev.c:8526:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:15.337 [2024-11-20 
14:33:27.148682] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:15.337 [2024-11-20 14:33:27.148690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.337 request: 00:14:15.337 { 00:14:15.337 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:15.337 "namespace": { 00:14:15.337 "bdev_name": "Malloc0", 00:14:15.337 "no_auto_visible": false, 00:14:15.337 "hide_metadata": false 00:14:15.337 }, 00:14:15.337 "method": "nvmf_subsystem_add_ns", 00:14:15.337 "req_id": 1 00:14:15.337 } 00:14:15.337 Got JSON-RPC error response 00:14:15.337 response: 00:14:15.337 { 00:14:15.337 "code": -32602, 00:14:15.337 "message": "Invalid parameters" 00:14:15.337 } 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:15.337 Adding namespace failed - expected result. 
00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:15.337 test case2: host connect to nvmf target in multiple paths 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:15.337 [2024-11-20 14:33:27.160794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.337 14:33:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.712 14:33:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:17.646 14:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:17.646 14:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:17.646 14:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.646 14:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:17.646 14:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:20.185 14:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:20.185 [global] 00:14:20.185 thread=1 00:14:20.185 invalidate=1 00:14:20.185 rw=write 00:14:20.185 time_based=1 00:14:20.185 runtime=1 00:14:20.185 ioengine=libaio 00:14:20.185 direct=1 00:14:20.185 bs=4096 00:14:20.185 iodepth=1 00:14:20.185 norandommap=0 00:14:20.185 numjobs=1 00:14:20.185 00:14:20.185 verify_dump=1 00:14:20.185 verify_backlog=512 00:14:20.185 verify_state_save=0 00:14:20.185 do_verify=1 00:14:20.185 verify=crc32c-intel 00:14:20.185 [job0] 00:14:20.185 filename=/dev/nvme0n1 00:14:20.185 Could not set queue depth (nvme0n1) 00:14:20.185 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.185 fio-3.35 00:14:20.185 Starting 1 thread 00:14:21.121 00:14:21.121 job0: (groupid=0, jobs=1): err= 0: pid=1478373: Wed Nov 20 14:33:32 2024 00:14:21.121 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:14:21.121 slat (nsec): min=6581, max=29571, avg=7568.34, stdev=1262.83 00:14:21.121 clat (usec): min=160, max=526, avg=195.35, stdev=28.79 00:14:21.121 lat (usec): min=167, max=533, avg=202.92, 
stdev=28.86 00:14:21.121 clat percentiles (usec): 00:14:21.121 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 178], 00:14:21.121 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:14:21.121 | 70.00th=[ 190], 80.00th=[ 208], 90.00th=[ 243], 95.00th=[ 269], 00:14:21.121 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 371], 99.95th=[ 420], 00:14:21.121 | 99.99th=[ 529] 00:14:21.121 write: IOPS=2915, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:14:21.121 slat (usec): min=9, max=24174, avg=19.00, stdev=447.32 00:14:21.121 clat (usec): min=109, max=386, avg=141.71, stdev=27.81 00:14:21.121 lat (usec): min=123, max=24560, avg=160.72, stdev=452.70 00:14:21.121 clat percentiles (usec): 00:14:21.121 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 126], 00:14:21.121 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:14:21.121 | 70.00th=[ 139], 80.00th=[ 157], 90.00th=[ 178], 95.00th=[ 188], 00:14:21.121 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 375], 00:14:21.121 | 99.99th=[ 388] 00:14:21.121 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:21.121 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:21.121 lat (usec) : 250=95.69%, 500=4.29%, 750=0.02% 00:14:21.121 cpu : usr=2.60%, sys=5.20%, ctx=5481, majf=0, minf=1 00:14:21.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:21.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.121 issued rwts: total=2560,2918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:21.121 00:14:21.121 Run status group 0 (all jobs): 00:14:21.121 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:14:21.121 WRITE: bw=11.4MiB/s 
(11.9MB/s), 11.4MiB/s-11.4MiB/s (11.9MB/s-11.9MB/s), io=11.4MiB (12.0MB), run=1001-1001msec 00:14:21.121 00:14:21.121 Disk stats (read/write): 00:14:21.121 nvme0n1: ios=2322/2560, merge=0/0, ticks=1427/363, in_queue=1790, util=98.60% 00:14:21.121 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.380 14:33:33 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.380 rmmod nvme_tcp 00:14:21.380 rmmod nvme_fabrics 00:14:21.380 rmmod nvme_keyring 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1477295 ']' 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1477295 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1477295 ']' 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1477295 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1477295 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1477295' 00:14:21.380 killing process with pid 1477295 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1477295 00:14:21.380 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1477295 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.640 14:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:24.175 00:14:24.175 real 0m15.659s 00:14:24.175 user 0m36.056s 00:14:24.175 sys 0m5.475s 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:24.175 ************************************ 00:14:24.175 END TEST nvmf_nmic 00:14:24.175 ************************************ 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:24.175 14:33:35 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:24.175 ************************************ 00:14:24.175 START TEST nvmf_fio_target 00:14:24.175 ************************************ 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:24.175 * Looking for test storage... 00:14:24.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.175 
14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.175 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:24.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.176 --rc genhtml_branch_coverage=1 00:14:24.176 --rc genhtml_function_coverage=1 00:14:24.176 --rc genhtml_legend=1 00:14:24.176 --rc geninfo_all_blocks=1 00:14:24.176 --rc geninfo_unexecuted_blocks=1 00:14:24.176 00:14:24.176 ' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:24.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.176 --rc genhtml_branch_coverage=1 00:14:24.176 --rc genhtml_function_coverage=1 00:14:24.176 --rc genhtml_legend=1 00:14:24.176 --rc geninfo_all_blocks=1 00:14:24.176 --rc geninfo_unexecuted_blocks=1 00:14:24.176 00:14:24.176 ' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:24.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.176 --rc genhtml_branch_coverage=1 00:14:24.176 --rc genhtml_function_coverage=1 00:14:24.176 --rc genhtml_legend=1 00:14:24.176 --rc geninfo_all_blocks=1 00:14:24.176 --rc geninfo_unexecuted_blocks=1 00:14:24.176 00:14:24.176 ' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:24.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.176 --rc genhtml_branch_coverage=1 00:14:24.176 --rc 
genhtml_function_coverage=1 00:14:24.176 --rc genhtml_legend=1 00:14:24.176 --rc geninfo_all_blocks=1 00:14:24.176 --rc geninfo_unexecuted_blocks=1 00:14:24.176 00:14:24.176 ' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:24.176 14:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:30.746 14:33:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:30.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:30.746 14:33:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:30.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:30.746 Found net devices under 0000:86:00.0: cvl_0_0 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:30.746 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:30.747 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:30.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:14:30.747 00:14:30.747 --- 10.0.0.2 ping statistics --- 00:14:30.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.747 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:14:30.747 00:14:30.747 --- 10.0.0.1 ping statistics --- 00:14:30.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.747 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1482145 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1482145 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1482145 ']' 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.747 14:33:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.747 [2024-11-20 14:33:41.840909] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:14:30.747 [2024-11-20 14:33:41.840969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.747 [2024-11-20 14:33:41.921469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.747 [2024-11-20 14:33:41.965031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.747 [2024-11-20 14:33:41.965068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.747 [2024-11-20 14:33:41.965076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.747 [2024-11-20 14:33:41.965082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.747 [2024-11-20 14:33:41.965087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.747 [2024-11-20 14:33:41.966703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.747 [2024-11-20 14:33:41.966822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.747 [2024-11-20 14:33:41.966932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.747 [2024-11-20 14:33:41.966933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:30.747 [2024-11-20 14:33:42.282525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:30.747 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.005 14:33:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:31.005 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.005 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:31.005 14:33:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.263 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:31.263 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:31.521 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.779 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:31.779 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.037 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:32.037 14:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.296 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:32.296 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:14:32.296 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.554 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:32.554 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.813 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:32.813 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:33.071 14:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.071 [2024-11-20 14:33:45.010281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.330 14:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:33.330 14:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:33.589 14:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:14:34.962 14:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:34.962 14:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:34.962 14:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.962 14:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:34.962 14:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:34.962 14:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:36.865 14:33:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:36.865 [global] 00:14:36.865 thread=1 00:14:36.865 invalidate=1 00:14:36.865 rw=write 00:14:36.865 time_based=1 00:14:36.865 runtime=1 00:14:36.865 ioengine=libaio 00:14:36.865 direct=1 00:14:36.865 bs=4096 00:14:36.865 iodepth=1 00:14:36.865 norandommap=0 00:14:36.865 numjobs=1 00:14:36.865 00:14:36.865 
verify_dump=1 00:14:36.865 verify_backlog=512 00:14:36.865 verify_state_save=0 00:14:36.865 do_verify=1 00:14:36.865 verify=crc32c-intel 00:14:36.865 [job0] 00:14:36.865 filename=/dev/nvme0n1 00:14:36.865 [job1] 00:14:36.865 filename=/dev/nvme0n2 00:14:36.865 [job2] 00:14:36.865 filename=/dev/nvme0n3 00:14:36.865 [job3] 00:14:36.865 filename=/dev/nvme0n4 00:14:36.865 Could not set queue depth (nvme0n1) 00:14:36.865 Could not set queue depth (nvme0n2) 00:14:36.865 Could not set queue depth (nvme0n3) 00:14:36.865 Could not set queue depth (nvme0n4) 00:14:37.124 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.124 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.124 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.124 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.124 fio-3.35 00:14:37.124 Starting 4 threads 00:14:38.584 00:14:38.584 job0: (groupid=0, jobs=1): err= 0: pid=1483549: Wed Nov 20 14:33:50 2024 00:14:38.584 read: IOPS=2004, BW=8020KiB/s (8212kB/s)(8204KiB/1023msec) 00:14:38.584 slat (nsec): min=8176, max=32209, avg=9034.41, stdev=1060.42 00:14:38.584 clat (usec): min=165, max=41187, avg=273.58, stdev=1561.17 00:14:38.584 lat (usec): min=174, max=41198, avg=282.62, stdev=1561.53 00:14:38.584 clat percentiles (usec): 00:14:38.584 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:14:38.584 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212], 00:14:38.584 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 253], 00:14:38.584 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[41157], 99.95th=[41157], 00:14:38.584 | 99.99th=[41157] 00:14:38.584 write: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1023msec); 0 zone resets 00:14:38.584 slat (usec): min=10, max=144, avg=11.91, stdev= 
3.00 00:14:38.584 clat (usec): min=116, max=305, avg=155.80, stdev=21.95 00:14:38.584 lat (usec): min=128, max=427, avg=167.71, stdev=22.54 00:14:38.584 clat percentiles (usec): 00:14:38.584 | 1.00th=[ 123], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 141], 00:14:38.584 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:14:38.585 | 70.00th=[ 159], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 196], 00:14:38.585 | 99.00th=[ 223], 99.50th=[ 258], 99.90th=[ 277], 99.95th=[ 285], 00:14:38.585 | 99.99th=[ 306] 00:14:38.585 bw ( KiB/s): min= 8192, max=12288, per=36.89%, avg=10240.00, stdev=2896.31, samples=2 00:14:38.585 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:14:38.585 lat (usec) : 250=95.81%, 500=4.12% 00:14:38.585 lat (msec) : 50=0.07% 00:14:38.585 cpu : usr=2.84%, sys=4.89%, ctx=4611, majf=0, minf=2 00:14:38.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 issued rwts: total=2051,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.585 job1: (groupid=0, jobs=1): err= 0: pid=1483567: Wed Nov 20 14:33:50 2024 00:14:38.585 read: IOPS=1492, BW=5971KiB/s (6114kB/s)(6168KiB/1033msec) 00:14:38.585 slat (nsec): min=7035, max=36936, avg=8187.64, stdev=1456.78 00:14:38.585 clat (usec): min=190, max=41032, avg=392.53, stdev=2069.39 00:14:38.585 lat (usec): min=199, max=41053, avg=400.71, stdev=2069.99 00:14:38.585 clat percentiles (usec): 00:14:38.585 | 1.00th=[ 208], 5.00th=[ 227], 10.00th=[ 239], 20.00th=[ 251], 00:14:38.585 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:14:38.585 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 355], 95.00th=[ 469], 00:14:38.585 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:14:38.585 | 
99.99th=[41157] 00:14:38.585 write: IOPS=1982, BW=7930KiB/s (8121kB/s)(8192KiB/1033msec); 0 zone resets 00:14:38.585 slat (nsec): min=10214, max=35871, avg=11327.05, stdev=1646.69 00:14:38.585 clat (usec): min=119, max=319, avg=186.23, stdev=34.60 00:14:38.585 lat (usec): min=130, max=330, avg=197.55, stdev=34.71 00:14:38.585 clat percentiles (usec): 00:14:38.585 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 159], 00:14:38.585 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 186], 00:14:38.585 | 70.00th=[ 196], 80.00th=[ 219], 90.00th=[ 243], 95.00th=[ 255], 00:14:38.585 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 297], 99.95th=[ 310], 00:14:38.585 | 99.99th=[ 318] 00:14:38.585 bw ( KiB/s): min= 8192, max= 8192, per=29.51%, avg=8192.00, stdev= 0.00, samples=2 00:14:38.585 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:14:38.585 lat (usec) : 250=61.48%, 500=37.72%, 750=0.67% 00:14:38.585 lat (msec) : 4=0.03%, 50=0.11% 00:14:38.585 cpu : usr=3.78%, sys=4.65%, ctx=3590, majf=0, minf=1 00:14:38.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.585 job2: (groupid=0, jobs=1): err= 0: pid=1483587: Wed Nov 20 14:33:50 2024 00:14:38.585 read: IOPS=1534, BW=6140KiB/s (6287kB/s)(6152KiB/1002msec) 00:14:38.585 slat (nsec): min=7506, max=23556, avg=8573.72, stdev=1199.78 00:14:38.585 clat (usec): min=209, max=40815, avg=370.87, stdev=1457.91 00:14:38.585 lat (usec): min=218, max=40838, avg=379.45, stdev=1458.22 00:14:38.585 clat percentiles (usec): 00:14:38.585 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 260], 00:14:38.585 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 
60.00th=[ 289], 00:14:38.585 | 70.00th=[ 310], 80.00th=[ 412], 90.00th=[ 486], 95.00th=[ 502], 00:14:38.585 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[40633], 99.95th=[40633], 00:14:38.585 | 99.99th=[40633] 00:14:38.585 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:14:38.585 slat (nsec): min=10720, max=43848, avg=12616.76, stdev=2109.07 00:14:38.585 clat (usec): min=117, max=3650, avg=186.22, stdev=85.03 00:14:38.585 lat (usec): min=128, max=3662, avg=198.84, stdev=85.16 00:14:38.585 clat percentiles (usec): 00:14:38.585 | 1.00th=[ 122], 5.00th=[ 141], 10.00th=[ 153], 20.00th=[ 159], 00:14:38.585 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:14:38.585 | 70.00th=[ 190], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 245], 00:14:38.585 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 383], 99.95th=[ 668], 00:14:38.585 | 99.99th=[ 3654] 00:14:38.585 bw ( KiB/s): min= 8192, max= 8192, per=29.51%, avg=8192.00, stdev= 0.00, samples=2 00:14:38.585 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:14:38.585 lat (usec) : 250=59.45%, 500=38.06%, 750=2.40% 00:14:38.585 lat (msec) : 4=0.03%, 50=0.06% 00:14:38.585 cpu : usr=3.39%, sys=5.39%, ctx=3588, majf=0, minf=1 00:14:38.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 issued rwts: total=1538,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.585 job3: (groupid=0, jobs=1): err= 0: pid=1483594: Wed Nov 20 14:33:50 2024 00:14:38.585 read: IOPS=383, BW=1532KiB/s (1569kB/s)(1552KiB/1013msec) 00:14:38.585 slat (nsec): min=7800, max=35932, avg=9755.99, stdev=3044.16 00:14:38.585 clat (usec): min=210, max=41128, avg=2347.46, stdev=9020.00 00:14:38.585 lat (usec): min=220, max=41150, 
avg=2357.22, stdev=9022.24 00:14:38.585 clat percentiles (usec): 00:14:38.585 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:14:38.585 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:14:38.585 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[40633], 00:14:38.585 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:38.585 | 99.99th=[41157] 00:14:38.585 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:14:38.585 slat (nsec): min=10626, max=37297, avg=12502.02, stdev=2290.80 00:14:38.585 clat (usec): min=146, max=305, avg=172.68, stdev=14.39 00:14:38.585 lat (usec): min=158, max=336, avg=185.18, stdev=15.36 00:14:38.585 clat percentiles (usec): 00:14:38.585 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:14:38.585 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:14:38.585 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:14:38.585 | 99.00th=[ 210], 99.50th=[ 225], 99.90th=[ 306], 99.95th=[ 306], 00:14:38.585 | 99.99th=[ 306] 00:14:38.585 bw ( KiB/s): min= 4096, max= 4096, per=14.76%, avg=4096.00, stdev= 0.00, samples=1 00:14:38.585 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:38.585 lat (usec) : 250=84.44%, 500=13.33% 00:14:38.585 lat (msec) : 50=2.22% 00:14:38.585 cpu : usr=0.40%, sys=1.19%, ctx=900, majf=0, minf=2 00:14:38.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.585 issued rwts: total=388,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.585 00:14:38.585 Run status group 0 (all jobs): 00:14:38.585 READ: bw=20.9MiB/s (21.9MB/s), 1532KiB/s-8020KiB/s (1569kB/s-8212kB/s), io=21.6MiB (22.6MB), 
run=1002-1033msec 00:14:38.585 WRITE: bw=27.1MiB/s (28.4MB/s), 2022KiB/s-9.77MiB/s (2070kB/s-10.2MB/s), io=28.0MiB (29.4MB), run=1002-1033msec 00:14:38.585 00:14:38.585 Disk stats (read/write): 00:14:38.585 nvme0n1: ios=2098/2287, merge=0/0, ticks=449/347, in_queue=796, util=86.47% 00:14:38.586 nvme0n2: ios=1586/1959, merge=0/0, ticks=487/345, in_queue=832, util=90.65% 00:14:38.586 nvme0n3: ios=1559/1679, merge=0/0, ticks=1367/296, in_queue=1663, util=93.53% 00:14:38.586 nvme0n4: ios=441/512, merge=0/0, ticks=827/87, in_queue=914, util=95.58% 00:14:38.586 14:33:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:38.586 [global] 00:14:38.586 thread=1 00:14:38.586 invalidate=1 00:14:38.586 rw=randwrite 00:14:38.586 time_based=1 00:14:38.586 runtime=1 00:14:38.586 ioengine=libaio 00:14:38.586 direct=1 00:14:38.586 bs=4096 00:14:38.586 iodepth=1 00:14:38.586 norandommap=0 00:14:38.586 numjobs=1 00:14:38.586 00:14:38.586 verify_dump=1 00:14:38.586 verify_backlog=512 00:14:38.586 verify_state_save=0 00:14:38.586 do_verify=1 00:14:38.586 verify=crc32c-intel 00:14:38.586 [job0] 00:14:38.586 filename=/dev/nvme0n1 00:14:38.586 [job1] 00:14:38.586 filename=/dev/nvme0n2 00:14:38.586 [job2] 00:14:38.586 filename=/dev/nvme0n3 00:14:38.586 [job3] 00:14:38.586 filename=/dev/nvme0n4 00:14:38.586 Could not set queue depth (nvme0n1) 00:14:38.586 Could not set queue depth (nvme0n2) 00:14:38.586 Could not set queue depth (nvme0n3) 00:14:38.586 Could not set queue depth (nvme0n4) 00:14:38.869 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.869 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.869 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.869 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.869 fio-3.35 00:14:38.869 Starting 4 threads 00:14:40.280 00:14:40.280 job0: (groupid=0, jobs=1): err= 0: pid=1484023: Wed Nov 20 14:33:51 2024 00:14:40.280 read: IOPS=22, BW=88.8KiB/s (90.9kB/s)(92.0KiB/1036msec) 00:14:40.280 slat (nsec): min=9904, max=23230, avg=17768.87, stdev=5471.35 00:14:40.280 clat (usec): min=40882, max=42095, avg=41074.02, stdev=321.27 00:14:40.280 lat (usec): min=40905, max=42105, avg=41091.79, stdev=318.53 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:40.280 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:40.280 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:14:40.280 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:40.280 | 99.99th=[42206] 00:14:40.280 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:14:40.280 slat (nsec): min=9052, max=39078, avg=9991.12, stdev=1555.34 00:14:40.280 clat (usec): min=123, max=298, avg=165.21, stdev=16.95 00:14:40.280 lat (usec): min=132, max=337, avg=175.20, stdev=17.48 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:14:40.280 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:14:40.280 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:14:40.280 | 99.00th=[ 208], 99.50th=[ 229], 99.90th=[ 297], 99.95th=[ 297], 00:14:40.280 | 99.99th=[ 297] 00:14:40.280 bw ( KiB/s): min= 4096, max= 4096, per=15.69%, avg=4096.00, stdev= 0.00, samples=1 00:14:40.280 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:40.280 lat (usec) : 250=95.51%, 500=0.19% 00:14:40.280 lat (msec) : 50=4.30% 00:14:40.280 cpu : usr=0.39%, sys=0.29%, ctx=535, majf=0, minf=1 00:14:40.280 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.280 job1: (groupid=0, jobs=1): err= 0: pid=1484040: Wed Nov 20 14:33:51 2024 00:14:40.280 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:14:40.280 slat (nsec): min=7344, max=33346, avg=8323.64, stdev=1437.19 00:14:40.280 clat (usec): min=171, max=479, avg=205.40, stdev=15.94 00:14:40.280 lat (usec): min=179, max=487, avg=213.73, stdev=16.00 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:14:40.280 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:14:40.280 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:14:40.280 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 383], 99.95th=[ 408], 00:14:40.280 | 99.99th=[ 478] 00:14:40.280 write: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:14:40.280 slat (nsec): min=10356, max=44835, avg=11561.69, stdev=1672.24 00:14:40.280 clat (usec): min=121, max=345, avg=152.61, stdev=13.75 00:14:40.280 lat (usec): min=132, max=360, avg=164.17, stdev=13.96 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:14:40.280 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:14:40.280 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:14:40.280 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 265], 99.95th=[ 297], 00:14:40.280 | 99.99th=[ 347] 00:14:40.280 bw ( KiB/s): min=12288, max=12288, per=47.06%, avg=12288.00, stdev= 0.00, samples=1 00:14:40.280 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:40.280 lat (usec) : 
250=99.45%, 500=0.55% 00:14:40.280 cpu : usr=4.90%, sys=7.60%, ctx=5229, majf=0, minf=1 00:14:40.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 issued rwts: total=2560,2667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.280 job2: (groupid=0, jobs=1): err= 0: pid=1484061: Wed Nov 20 14:33:51 2024 00:14:40.280 read: IOPS=2075, BW=8304KiB/s (8503kB/s)(8312KiB/1001msec) 00:14:40.280 slat (nsec): min=8414, max=25599, avg=9328.90, stdev=1140.83 00:14:40.280 clat (usec): min=189, max=488, avg=227.82, stdev=17.40 00:14:40.280 lat (usec): min=198, max=498, avg=237.15, stdev=17.43 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:14:40.280 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:14:40.280 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:14:40.280 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 289], 00:14:40.280 | 99.99th=[ 490] 00:14:40.280 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:40.280 slat (nsec): min=11652, max=39467, avg=12707.52, stdev=1610.00 00:14:40.280 clat (usec): min=137, max=290, avg=179.60, stdev=28.30 00:14:40.280 lat (usec): min=149, max=329, avg=192.31, stdev=28.38 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:14:40.280 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:14:40.280 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 265], 00:14:40.280 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 285], 99.95th=[ 289], 00:14:40.280 | 99.99th=[ 289] 00:14:40.280 bw ( KiB/s): min= 9760, max= 9760, per=37.38%, avg=9760.00, 
stdev= 0.00, samples=1 00:14:40.280 iops : min= 2440, max= 2440, avg=2440.00, stdev= 0.00, samples=1 00:14:40.280 lat (usec) : 250=91.38%, 500=8.62% 00:14:40.280 cpu : usr=3.00%, sys=9.30%, ctx=4638, majf=0, minf=1 00:14:40.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 issued rwts: total=2078,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.280 job3: (groupid=0, jobs=1): err= 0: pid=1484067: Wed Nov 20 14:33:51 2024 00:14:40.280 read: IOPS=677, BW=2712KiB/s (2777kB/s)(2728KiB/1006msec) 00:14:40.280 slat (nsec): min=7204, max=27002, avg=9835.71, stdev=2176.65 00:14:40.280 clat (usec): min=189, max=41001, avg=1179.23, stdev=6148.24 00:14:40.280 lat (usec): min=198, max=41014, avg=1189.06, stdev=6149.06 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 215], 00:14:40.280 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:14:40.280 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258], 00:14:40.280 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:40.280 | 99.99th=[41157] 00:14:40.280 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:14:40.280 slat (nsec): min=11428, max=35258, avg=12585.61, stdev=1593.73 00:14:40.280 clat (usec): min=143, max=322, avg=171.87, stdev=14.57 00:14:40.280 lat (usec): min=155, max=358, avg=184.45, stdev=14.98 00:14:40.280 clat percentiles (usec): 00:14:40.280 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:14:40.280 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:14:40.280 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:14:40.280 | 99.00th=[ 208], 99.50th=[ 
219], 99.90th=[ 277], 99.95th=[ 322], 00:14:40.280 | 99.99th=[ 322] 00:14:40.280 bw ( KiB/s): min= 8192, max= 8192, per=31.37%, avg=8192.00, stdev= 0.00, samples=1 00:14:40.280 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:40.280 lat (usec) : 250=96.01%, 500=3.05% 00:14:40.280 lat (msec) : 50=0.94% 00:14:40.280 cpu : usr=1.39%, sys=2.99%, ctx=1706, majf=0, minf=1 00:14:40.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.280 issued rwts: total=682,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.280 00:14:40.280 Run status group 0 (all jobs): 00:14:40.280 READ: bw=20.1MiB/s (21.1MB/s), 88.8KiB/s-9.99MiB/s (90.9kB/s-10.5MB/s), io=20.9MiB (21.9MB), run=1001-1036msec 00:14:40.280 WRITE: bw=25.5MiB/s (26.7MB/s), 1977KiB/s-10.4MiB/s (2024kB/s-10.9MB/s), io=26.4MiB (27.7MB), run=1001-1036msec 00:14:40.280 00:14:40.280 Disk stats (read/write): 00:14:40.280 nvme0n1: ios=52/512, merge=0/0, ticks=1067/83, in_queue=1150, util=98.90% 00:14:40.280 nvme0n2: ios=2071/2437, merge=0/0, ticks=1309/356, in_queue=1665, util=91.07% 00:14:40.280 nvme0n3: ios=1904/2048, merge=0/0, ticks=480/340, in_queue=820, util=90.73% 00:14:40.281 nvme0n4: ios=732/1024, merge=0/0, ticks=705/164, in_queue=869, util=95.06% 00:14:40.281 14:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:40.281 [global] 00:14:40.281 thread=1 00:14:40.281 invalidate=1 00:14:40.281 rw=write 00:14:40.281 time_based=1 00:14:40.281 runtime=1 00:14:40.281 ioengine=libaio 00:14:40.281 direct=1 00:14:40.281 bs=4096 00:14:40.281 iodepth=128 00:14:40.281 norandommap=0 00:14:40.281 numjobs=1 
00:14:40.281 00:14:40.281 verify_dump=1 00:14:40.281 verify_backlog=512 00:14:40.281 verify_state_save=0 00:14:40.281 do_verify=1 00:14:40.281 verify=crc32c-intel 00:14:40.281 [job0] 00:14:40.281 filename=/dev/nvme0n1 00:14:40.281 [job1] 00:14:40.281 filename=/dev/nvme0n2 00:14:40.281 [job2] 00:14:40.281 filename=/dev/nvme0n3 00:14:40.281 [job3] 00:14:40.281 filename=/dev/nvme0n4 00:14:40.281 Could not set queue depth (nvme0n1) 00:14:40.281 Could not set queue depth (nvme0n2) 00:14:40.281 Could not set queue depth (nvme0n3) 00:14:40.281 Could not set queue depth (nvme0n4) 00:14:40.281 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.281 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.281 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.281 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.281 fio-3.35 00:14:40.281 Starting 4 threads 00:14:41.654 00:14:41.654 job0: (groupid=0, jobs=1): err= 0: pid=1484469: Wed Nov 20 14:33:53 2024 00:14:41.654 read: IOPS=5502, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1006msec) 00:14:41.654 slat (nsec): min=1154, max=20187k, avg=89189.06, stdev=703754.65 00:14:41.654 clat (usec): min=2969, max=52960, avg=12055.76, stdev=6484.77 00:14:41.654 lat (usec): min=2978, max=52966, avg=12144.95, stdev=6528.59 00:14:41.654 clat percentiles (usec): 00:14:41.654 | 1.00th=[ 4228], 5.00th=[ 6390], 10.00th=[ 7570], 20.00th=[ 9241], 00:14:41.654 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[11207], 00:14:41.654 | 70.00th=[12125], 80.00th=[13566], 90.00th=[17171], 95.00th=[19530], 00:14:41.654 | 99.00th=[50594], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:14:41.654 | 99.99th=[53216] 00:14:41.654 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 
00:14:41.654 slat (nsec): min=1931, max=11302k, avg=70205.31, stdev=388646.78 00:14:41.654 clat (usec): min=2086, max=60471, avg=10616.90, stdev=5130.23 00:14:41.654 lat (usec): min=2098, max=60479, avg=10687.11, stdev=5147.09 00:14:41.654 clat percentiles (usec): 00:14:41.654 | 1.00th=[ 2966], 5.00th=[ 4621], 10.00th=[ 6259], 20.00th=[ 7963], 00:14:41.654 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10159], 00:14:41.654 | 70.00th=[10421], 80.00th=[12387], 90.00th=[13960], 95.00th=[17433], 00:14:41.654 | 99.00th=[36963], 99.50th=[41157], 99.90th=[60556], 99.95th=[60556], 00:14:41.654 | 99.99th=[60556] 00:14:41.654 bw ( KiB/s): min=21168, max=23888, per=31.04%, avg=22528.00, stdev=1923.33, samples=2 00:14:41.654 iops : min= 5292, max= 5972, avg=5632.00, stdev=480.83, samples=2 00:14:41.654 lat (msec) : 4=1.94%, 10=40.46%, 20=53.81%, 50=3.10%, 100=0.69% 00:14:41.654 cpu : usr=4.48%, sys=6.27%, ctx=676, majf=0, minf=1 00:14:41.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:41.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.654 issued rwts: total=5536,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.654 job1: (groupid=0, jobs=1): err= 0: pid=1484470: Wed Nov 20 14:33:53 2024 00:14:41.654 read: IOPS=5327, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1007msec) 00:14:41.654 slat (nsec): min=1266, max=11168k, avg=99626.26, stdev=693335.61 00:14:41.654 clat (usec): min=3694, max=37392, avg=11939.14, stdev=3963.73 00:14:41.654 lat (usec): min=3699, max=37398, avg=12038.77, stdev=4012.48 00:14:41.654 clat percentiles (usec): 00:14:41.654 | 1.00th=[ 4752], 5.00th=[ 7570], 10.00th=[ 9241], 20.00th=[ 9765], 00:14:41.654 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10683], 60.00th=[11338], 00:14:41.654 | 70.00th=[12911], 80.00th=[13698], 
90.00th=[16712], 95.00th=[18744], 00:14:41.654 | 99.00th=[29230], 99.50th=[32113], 99.90th=[36963], 99.95th=[37487], 00:14:41.654 | 99.99th=[37487] 00:14:41.654 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:14:41.654 slat (usec): min=2, max=10675, avg=77.58, stdev=356.61 00:14:41.654 clat (usec): min=1704, max=37376, avg=11230.28, stdev=5330.04 00:14:41.654 lat (usec): min=1714, max=37380, avg=11307.86, stdev=5366.84 00:14:41.654 clat percentiles (usec): 00:14:41.654 | 1.00th=[ 2900], 5.00th=[ 4359], 10.00th=[ 6390], 20.00th=[ 8586], 00:14:41.654 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:14:41.654 | 70.00th=[10552], 80.00th=[12125], 90.00th=[19268], 95.00th=[23200], 00:14:41.654 | 99.00th=[31327], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:14:41.654 | 99.99th=[37487] 00:14:41.654 bw ( KiB/s): min=20464, max=24592, per=31.04%, avg=22528.00, stdev=2918.94, samples=2 00:14:41.654 iops : min= 5116, max= 6148, avg=5632.00, stdev=729.73, samples=2 00:14:41.654 lat (msec) : 2=0.17%, 4=2.06%, 10=29.37%, 20=61.97%, 50=6.43% 00:14:41.654 cpu : usr=4.57%, sys=5.57%, ctx=710, majf=0, minf=1 00:14:41.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:41.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.654 issued rwts: total=5365,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.654 job2: (groupid=0, jobs=1): err= 0: pid=1484471: Wed Nov 20 14:33:53 2024 00:14:41.654 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:14:41.654 slat (nsec): min=1103, max=17587k, avg=140716.80, stdev=981664.60 00:14:41.654 clat (usec): min=6268, max=44312, avg=16692.57, stdev=5523.10 00:14:41.654 lat (usec): min=6300, max=44338, avg=16833.29, stdev=5574.02 00:14:41.654 clat percentiles 
(usec): 00:14:41.654 | 1.00th=[ 8029], 5.00th=[11338], 10.00th=[12649], 20.00th=[13173], 00:14:41.654 | 30.00th=[13698], 40.00th=[14615], 50.00th=[15008], 60.00th=[15533], 00:14:41.654 | 70.00th=[16909], 80.00th=[19006], 90.00th=[25297], 95.00th=[29230], 00:14:41.654 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:14:41.654 | 99.99th=[44303] 00:14:41.654 write: IOPS=2924, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1008msec); 0 zone resets 00:14:41.654 slat (nsec): min=1918, max=23133k, avg=210158.02, stdev=1331233.51 00:14:41.654 clat (usec): min=1950, max=81773, avg=28587.34, stdev=15329.47 00:14:41.655 lat (usec): min=1953, max=81801, avg=28797.50, stdev=15423.26 00:14:41.655 clat percentiles (usec): 00:14:41.655 | 1.00th=[ 3326], 5.00th=[ 7767], 10.00th=[10945], 20.00th=[14615], 00:14:41.655 | 30.00th=[19268], 40.00th=[21890], 50.00th=[23462], 60.00th=[29492], 00:14:41.655 | 70.00th=[35390], 80.00th=[45876], 90.00th=[50594], 95.00th=[57410], 00:14:41.655 | 99.00th=[64226], 99.50th=[68682], 99.90th=[78119], 99.95th=[78119], 00:14:41.655 | 99.99th=[81265] 00:14:41.655 bw ( KiB/s): min=11224, max=11336, per=15.54%, avg=11280.00, stdev=79.20, samples=2 00:14:41.655 iops : min= 2806, max= 2834, avg=2820.00, stdev=19.80, samples=2 00:14:41.655 lat (msec) : 2=0.18%, 4=0.58%, 10=4.52%, 20=51.05%, 50=37.56% 00:14:41.655 lat (msec) : 100=6.10% 00:14:41.655 cpu : usr=1.59%, sys=3.28%, ctx=261, majf=0, minf=1 00:14:41.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:41.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.655 issued rwts: total=2560,2948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.655 job3: (groupid=0, jobs=1): err= 0: pid=1484472: Wed Nov 20 14:33:53 2024 00:14:41.655 read: IOPS=3793, BW=14.8MiB/s 
(15.5MB/s)(15.0MiB/1009msec) 00:14:41.655 slat (nsec): min=1382, max=16060k, avg=133655.44, stdev=879410.91 00:14:41.655 clat (usec): min=3600, max=63675, avg=15386.70, stdev=7543.17 00:14:41.655 lat (usec): min=4022, max=63688, avg=15520.35, stdev=7616.74 00:14:41.655 clat percentiles (usec): 00:14:41.655 | 1.00th=[ 6194], 5.00th=[10290], 10.00th=[10552], 20.00th=[11338], 00:14:41.655 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13042], 60.00th=[13829], 00:14:41.655 | 70.00th=[16057], 80.00th=[17433], 90.00th=[22414], 95.00th=[30016], 00:14:41.655 | 99.00th=[50070], 99.50th=[57410], 99.90th=[63701], 99.95th=[63701], 00:14:41.655 | 99.99th=[63701] 00:14:41.655 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:14:41.655 slat (usec): min=2, max=10503, avg=111.59, stdev=571.47 00:14:41.655 clat (usec): min=1591, max=73089, avg=16828.43, stdev=11554.82 00:14:41.655 lat (usec): min=1626, max=73101, avg=16940.01, stdev=11615.95 00:14:41.655 clat percentiles (usec): 00:14:41.655 | 1.00th=[ 3294], 5.00th=[ 5932], 10.00th=[ 8586], 20.00th=[10552], 00:14:41.655 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12125], 60.00th=[13960], 00:14:41.655 | 70.00th=[18220], 80.00th=[22414], 90.00th=[27919], 95.00th=[41157], 00:14:41.655 | 99.00th=[68682], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:14:41.655 | 99.99th=[72877] 00:14:41.655 bw ( KiB/s): min=12288, max=20480, per=22.57%, avg=16384.00, stdev=5792.62, samples=2 00:14:41.655 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:14:41.655 lat (msec) : 2=0.01%, 4=0.95%, 10=8.71%, 20=71.30%, 50=17.05% 00:14:41.655 lat (msec) : 100=1.98% 00:14:41.655 cpu : usr=3.67%, sys=5.16%, ctx=491, majf=0, minf=2 00:14:41.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:41.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.655 issued 
rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.655 00:14:41.655 Run status group 0 (all jobs): 00:14:41.655 READ: bw=66.9MiB/s (70.2MB/s), 9.92MiB/s-21.5MiB/s (10.4MB/s-22.5MB/s), io=67.5MiB (70.8MB), run=1006-1009msec 00:14:41.655 WRITE: bw=70.9MiB/s (74.3MB/s), 11.4MiB/s-21.9MiB/s (12.0MB/s-22.9MB/s), io=71.5MiB (75.0MB), run=1006-1009msec 00:14:41.655 00:14:41.655 Disk stats (read/write): 00:14:41.655 nvme0n1: ios=4647/4791, merge=0/0, ticks=48460/45986, in_queue=94446, util=97.70% 00:14:41.655 nvme0n2: ios=4514/4608, merge=0/0, ticks=51927/52150, in_queue=104077, util=92.28% 00:14:41.655 nvme0n3: ios=2105/2336, merge=0/0, ticks=15868/28041, in_queue=43909, util=90.11% 00:14:41.655 nvme0n4: ios=3493/3584, merge=0/0, ticks=51827/52472, in_queue=104299, util=100.00% 00:14:41.655 14:33:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:41.655 [global] 00:14:41.655 thread=1 00:14:41.655 invalidate=1 00:14:41.655 rw=randwrite 00:14:41.655 time_based=1 00:14:41.655 runtime=1 00:14:41.655 ioengine=libaio 00:14:41.655 direct=1 00:14:41.655 bs=4096 00:14:41.655 iodepth=128 00:14:41.655 norandommap=0 00:14:41.655 numjobs=1 00:14:41.655 00:14:41.655 verify_dump=1 00:14:41.655 verify_backlog=512 00:14:41.655 verify_state_save=0 00:14:41.655 do_verify=1 00:14:41.655 verify=crc32c-intel 00:14:41.655 [job0] 00:14:41.655 filename=/dev/nvme0n1 00:14:41.655 [job1] 00:14:41.655 filename=/dev/nvme0n2 00:14:41.655 [job2] 00:14:41.655 filename=/dev/nvme0n3 00:14:41.655 [job3] 00:14:41.655 filename=/dev/nvme0n4 00:14:41.655 Could not set queue depth (nvme0n1) 00:14:41.655 Could not set queue depth (nvme0n2) 00:14:41.655 Could not set queue depth (nvme0n3) 00:14:41.655 Could not set queue depth (nvme0n4) 00:14:41.913 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.913 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.913 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.913 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.913 fio-3.35 00:14:41.913 Starting 4 threads 00:14:43.314 00:14:43.314 job0: (groupid=0, jobs=1): err= 0: pid=1484840: Wed Nov 20 14:33:54 2024 00:14:43.314 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:14:43.314 slat (nsec): min=1402, max=5672.8k, avg=95841.27, stdev=530299.77 00:14:43.314 clat (usec): min=7150, max=22562, avg=12090.16, stdev=1887.01 00:14:43.314 lat (usec): min=7159, max=26384, avg=12186.00, stdev=1937.43 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[11469], 00:14:43.314 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:14:43.314 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14615], 95.00th=[15401], 00:14:43.314 | 99.00th=[18744], 99.50th=[19792], 99.90th=[22414], 99.95th=[22414], 00:14:43.314 | 99.99th=[22676] 00:14:43.314 write: IOPS=4974, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1005msec); 0 zone resets 00:14:43.314 slat (usec): min=2, max=25947, avg=106.16, stdev=741.89 00:14:43.314 clat (usec): min=4267, max=66324, avg=14007.55, stdev=7222.33 00:14:43.314 lat (usec): min=4884, max=66358, avg=14113.71, stdev=7287.07 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 7111], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11076], 00:14:43.314 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:14:43.314 | 70.00th=[11994], 80.00th=[13829], 90.00th=[21103], 95.00th=[32375], 00:14:43.314 | 99.00th=[51643], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:14:43.314 | 
99.99th=[66323] 00:14:43.314 bw ( KiB/s): min=19000, max=19976, per=24.20%, avg=19488.00, stdev=690.14, samples=2 00:14:43.314 iops : min= 4750, max= 4994, avg=4872.00, stdev=172.53, samples=2 00:14:43.314 lat (msec) : 10=7.54%, 20=86.68%, 50=5.12%, 100=0.67% 00:14:43.314 cpu : usr=3.09%, sys=5.68%, ctx=648, majf=0, minf=1 00:14:43.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:43.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.314 issued rwts: total=4608,4999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.314 job1: (groupid=0, jobs=1): err= 0: pid=1484841: Wed Nov 20 14:33:54 2024 00:14:43.314 read: IOPS=5539, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1005msec) 00:14:43.314 slat (nsec): min=1155, max=11816k, avg=89153.43, stdev=619529.96 00:14:43.314 clat (usec): min=3755, max=31063, avg=12141.50, stdev=3480.45 00:14:43.314 lat (usec): min=3760, max=31230, avg=12230.66, stdev=3512.26 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 4359], 5.00th=[ 7570], 10.00th=[ 8848], 20.00th=[10028], 00:14:43.314 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:14:43.314 | 70.00th=[12125], 80.00th=[13698], 90.00th=[16319], 95.00th=[19006], 00:14:43.314 | 99.00th=[26608], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:14:43.314 | 99.99th=[31065] 00:14:43.314 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:14:43.314 slat (nsec): min=1948, max=9329.4k, avg=76384.71, stdev=460362.82 00:14:43.314 clat (usec): min=1024, max=26129, avg=10561.51, stdev=2407.55 00:14:43.314 lat (usec): min=1032, max=26764, avg=10637.90, stdev=2451.56 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 3392], 5.00th=[ 5604], 10.00th=[ 7439], 20.00th=[ 9634], 00:14:43.314 | 30.00th=[10159], 40.00th=[10552], 
50.00th=[11207], 60.00th=[11338], 00:14:43.314 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12780], 00:14:43.314 | 99.00th=[17171], 99.50th=[20841], 99.90th=[26084], 99.95th=[26084], 00:14:43.314 | 99.99th=[26084] 00:14:43.314 bw ( KiB/s): min=22376, max=22680, per=27.97%, avg=22528.00, stdev=214.96, samples=2 00:14:43.314 iops : min= 5594, max= 5670, avg=5632.00, stdev=53.74, samples=2 00:14:43.314 lat (msec) : 2=0.27%, 4=1.07%, 10=20.98%, 20=75.49%, 50=2.19% 00:14:43.314 cpu : usr=4.48%, sys=6.67%, ctx=532, majf=0, minf=1 00:14:43.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:43.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.314 issued rwts: total=5567,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.314 job2: (groupid=0, jobs=1): err= 0: pid=1484842: Wed Nov 20 14:33:54 2024 00:14:43.314 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:14:43.314 slat (nsec): min=1158, max=12257k, avg=106265.97, stdev=707025.57 00:14:43.314 clat (usec): min=4966, max=25455, avg=13647.54, stdev=2843.59 00:14:43.314 lat (usec): min=4974, max=25459, avg=13753.80, stdev=2896.05 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 5276], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[12125], 00:14:43.314 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:14:43.314 | 70.00th=[13829], 80.00th=[14615], 90.00th=[17433], 95.00th=[19268], 00:14:43.314 | 99.00th=[23200], 99.50th=[23987], 99.90th=[25560], 99.95th=[25560], 00:14:43.314 | 99.99th=[25560] 00:14:43.314 write: IOPS=5007, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1006msec); 0 zone resets 00:14:43.314 slat (usec): min=2, max=12040, avg=93.54, stdev=553.54 00:14:43.314 clat (usec): min=1730, max=25605, avg=12795.29, stdev=3064.87 00:14:43.314 lat (usec): 
min=3402, max=27332, avg=12888.83, stdev=3101.74 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 4948], 5.00th=[ 7373], 10.00th=[ 9241], 20.00th=[10945], 00:14:43.314 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:14:43.314 | 70.00th=[13435], 80.00th=[13960], 90.00th=[16450], 95.00th=[18482], 00:14:43.314 | 99.00th=[22676], 99.50th=[22676], 99.90th=[23987], 99.95th=[25560], 00:14:43.314 | 99.99th=[25560] 00:14:43.314 bw ( KiB/s): min=18800, max=20480, per=24.38%, avg=19640.00, stdev=1187.94, samples=2 00:14:43.314 iops : min= 4700, max= 5120, avg=4910.00, stdev=296.98, samples=2 00:14:43.314 lat (msec) : 2=0.01%, 4=0.10%, 10=9.70%, 20=86.77%, 50=3.41% 00:14:43.314 cpu : usr=2.89%, sys=6.47%, ctx=473, majf=0, minf=2 00:14:43.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:43.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.314 issued rwts: total=4608,5038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.314 job3: (groupid=0, jobs=1): err= 0: pid=1484844: Wed Nov 20 14:33:54 2024 00:14:43.314 read: IOPS=4534, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1007msec) 00:14:43.314 slat (nsec): min=1475, max=13459k, avg=113420.75, stdev=709233.83 00:14:43.314 clat (usec): min=3865, max=35781, avg=14143.05, stdev=3544.39 00:14:43.314 lat (usec): min=7467, max=35787, avg=14256.47, stdev=3584.35 00:14:43.314 clat percentiles (usec): 00:14:43.314 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12780], 00:14:43.314 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13304], 60.00th=[13566], 00:14:43.314 | 70.00th=[14091], 80.00th=[15926], 90.00th=[17433], 95.00th=[20317], 00:14:43.314 | 99.00th=[30540], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:14:43.314 | 99.99th=[35914] 00:14:43.314 write: IOPS=4575, 
BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:14:43.315 slat (usec): min=2, max=12676, avg=99.29, stdev=533.81 00:14:43.315 clat (usec): min=6980, max=27895, avg=13700.60, stdev=2695.07 00:14:43.315 lat (usec): min=6989, max=27917, avg=13799.88, stdev=2744.16 00:14:43.315 clat percentiles (usec): 00:14:43.315 | 1.00th=[ 8356], 5.00th=[ 9241], 10.00th=[11338], 20.00th=[12649], 00:14:43.315 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:14:43.315 | 70.00th=[13698], 80.00th=[14091], 90.00th=[16581], 95.00th=[18482], 00:14:43.315 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:14:43.315 | 99.99th=[27919] 00:14:43.315 bw ( KiB/s): min=17680, max=19184, per=22.88%, avg=18432.00, stdev=1063.49, samples=2 00:14:43.315 iops : min= 4420, max= 4796, avg=4608.00, stdev=265.87, samples=2 00:14:43.315 lat (msec) : 4=0.01%, 10=5.91%, 20=89.95%, 50=4.13% 00:14:43.315 cpu : usr=2.09%, sys=6.56%, ctx=546, majf=0, minf=2 00:14:43.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:43.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.315 issued rwts: total=4566,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.315 00:14:43.315 Run status group 0 (all jobs): 00:14:43.315 READ: bw=75.1MiB/s (78.7MB/s), 17.7MiB/s-21.6MiB/s (18.6MB/s-22.7MB/s), io=75.6MiB (79.3MB), run=1005-1007msec 00:14:43.315 WRITE: bw=78.7MiB/s (82.5MB/s), 17.9MiB/s-21.9MiB/s (18.7MB/s-23.0MB/s), io=79.2MiB (83.1MB), run=1005-1007msec 00:14:43.315 00:14:43.315 Disk stats (read/write): 00:14:43.315 nvme0n1: ios=3869/4096, merge=0/0, ticks=23754/28790, in_queue=52544, util=99.90% 00:14:43.315 nvme0n2: ios=4633/4758, merge=0/0, ticks=45412/35856, in_queue=81268, util=98.98% 00:14:43.315 nvme0n3: ios=4099/4096, merge=0/0, 
ticks=39922/36706, in_queue=76628, util=99.27% 00:14:43.315 nvme0n4: ios=3710/4096, merge=0/0, ticks=29441/29579, in_queue=59020, util=89.64% 00:14:43.315 14:33:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:43.315 14:33:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1485078 00:14:43.315 14:33:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:43.315 14:33:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:43.315 [global] 00:14:43.315 thread=1 00:14:43.315 invalidate=1 00:14:43.315 rw=read 00:14:43.315 time_based=1 00:14:43.315 runtime=10 00:14:43.315 ioengine=libaio 00:14:43.315 direct=1 00:14:43.315 bs=4096 00:14:43.315 iodepth=1 00:14:43.315 norandommap=1 00:14:43.315 numjobs=1 00:14:43.315 00:14:43.315 [job0] 00:14:43.315 filename=/dev/nvme0n1 00:14:43.315 [job1] 00:14:43.315 filename=/dev/nvme0n2 00:14:43.315 [job2] 00:14:43.315 filename=/dev/nvme0n3 00:14:43.315 [job3] 00:14:43.315 filename=/dev/nvme0n4 00:14:43.315 Could not set queue depth (nvme0n1) 00:14:43.315 Could not set queue depth (nvme0n2) 00:14:43.315 Could not set queue depth (nvme0n3) 00:14:43.315 Could not set queue depth (nvme0n4) 00:14:43.572 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.572 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.572 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.572 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.572 fio-3.35 00:14:43.572 Starting 4 threads 00:14:46.096 14:33:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:46.352 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=21372928, buflen=4096 00:14:46.352 fio: pid=1485221, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:46.352 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:46.610 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49057792, buflen=4096 00:14:46.610 fio: pid=1485220, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:46.610 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.610 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:46.868 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42340352, buflen=4096 00:14:46.868 fio: pid=1485218, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:46.868 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.868 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:46.868 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.868 14:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:47.126 fio: io_u error on file /dev/nvme0n2: Operation not 
supported: read offset=548864, buflen=4096 00:14:47.126 fio: pid=1485219, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:47.126 00:14:47.126 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1485218: Wed Nov 20 14:33:58 2024 00:14:47.126 read: IOPS=3338, BW=13.0MiB/s (13.7MB/s)(40.4MiB/3097msec) 00:14:47.126 slat (usec): min=3, max=11648, avg= 9.21, stdev=141.20 00:14:47.126 clat (usec): min=172, max=41509, avg=287.29, stdev=1597.19 00:14:47.126 lat (usec): min=179, max=41513, avg=296.51, stdev=1603.64 00:14:47.126 clat percentiles (usec): 00:14:47.126 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:14:47.126 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:14:47.126 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:14:47.126 | 99.00th=[ 302], 99.50th=[ 379], 99.90th=[40633], 99.95th=[40633], 00:14:47.126 | 99.99th=[41681] 00:14:47.126 bw ( KiB/s): min= 4312, max=17336, per=40.60%, avg=13473.00, stdev=5160.33, samples=6 00:14:47.126 iops : min= 1078, max= 4334, avg=3368.17, stdev=1290.13, samples=6 00:14:47.126 lat (usec) : 250=93.84%, 500=5.95%, 750=0.03% 00:14:47.126 lat (msec) : 2=0.01%, 4=0.01%, 50=0.15% 00:14:47.126 cpu : usr=0.87%, sys=2.94%, ctx=10341, majf=0, minf=1 00:14:47.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.126 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.126 issued rwts: total=10338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.126 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1485219: Wed Nov 20 14:33:58 2024 00:14:47.126 read: IOPS=40, BW=161KiB/s (165kB/s)(536KiB/3335msec) 00:14:47.126 slat (usec): min=3, 
max=3818, avg=45.98, stdev=327.25 00:14:47.126 clat (usec): min=203, max=44910, avg=24642.07, stdev=20016.03 00:14:47.126 lat (usec): min=222, max=45020, avg=24688.23, stdev=20044.92 00:14:47.126 clat percentiles (usec): 00:14:47.126 | 1.00th=[ 215], 5.00th=[ 241], 10.00th=[ 269], 20.00th=[ 326], 00:14:47.126 | 30.00th=[ 441], 40.00th=[ 1319], 50.00th=[40633], 60.00th=[41157], 00:14:47.126 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:47.126 | 99.00th=[41681], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:47.126 | 99.99th=[44827] 00:14:47.126 bw ( KiB/s): min= 94, max= 352, per=0.50%, avg=167.67, stdev=93.06, samples=6 00:14:47.126 iops : min= 23, max= 88, avg=41.83, stdev=23.34, samples=6 00:14:47.126 lat (usec) : 250=6.67%, 500=26.67%, 750=5.19%, 1000=0.74% 00:14:47.126 lat (msec) : 2=0.74%, 50=59.26% 00:14:47.126 cpu : usr=0.00%, sys=0.15%, ctx=139, majf=0, minf=2 00:14:47.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.126 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.126 issued rwts: total=135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.126 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1485220: Wed Nov 20 14:33:58 2024 00:14:47.126 read: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(46.8MiB/2891msec) 00:14:47.126 slat (usec): min=5, max=11716, avg= 9.13, stdev=124.94 00:14:47.126 clat (usec): min=155, max=41464, avg=229.50, stdev=791.89 00:14:47.127 lat (usec): min=162, max=41472, avg=238.63, stdev=801.81 00:14:47.127 clat percentiles (usec): 00:14:47.127 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:14:47.127 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:14:47.127 | 70.00th=[ 219], 80.00th=[ 225], 
90.00th=[ 239], 95.00th=[ 273], 00:14:47.127 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 375], 99.95th=[ 8029], 00:14:47.127 | 99.99th=[41157] 00:14:47.127 bw ( KiB/s): min=11544, max=19752, per=51.14%, avg=16969.60, stdev=3155.70, samples=5 00:14:47.127 iops : min= 2886, max= 4938, avg=4242.40, stdev=788.92, samples=5 00:14:47.127 lat (usec) : 250=92.02%, 500=7.91%, 750=0.01% 00:14:47.127 lat (msec) : 10=0.01%, 50=0.04% 00:14:47.127 cpu : usr=0.97%, sys=3.84%, ctx=11981, majf=0, minf=2 00:14:47.127 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.127 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.127 issued rwts: total=11978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.127 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.127 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1485221: Wed Nov 20 14:33:58 2024 00:14:47.127 read: IOPS=1948, BW=7794KiB/s (7981kB/s)(20.4MiB/2678msec) 00:14:47.127 slat (nsec): min=8521, max=39160, avg=9632.80, stdev=1804.53 00:14:47.127 clat (usec): min=167, max=41125, avg=497.26, stdev=3324.27 00:14:47.127 lat (usec): min=176, max=41148, avg=506.89, stdev=3325.27 00:14:47.127 clat percentiles (usec): 00:14:47.127 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 208], 00:14:47.127 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:14:47.127 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 258], 00:14:47.127 | 99.00th=[ 310], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:47.127 | 99.99th=[41157] 00:14:47.127 bw ( KiB/s): min= 96, max=17032, per=22.26%, avg=7387.20, stdev=8844.24, samples=5 00:14:47.127 iops : min= 24, max= 4258, avg=1846.80, stdev=2211.06, samples=5 00:14:47.127 lat (usec) : 250=92.58%, 500=6.65%, 750=0.02%, 1000=0.02% 00:14:47.127 lat (msec) : 
2=0.02%, 10=0.02%, 50=0.67% 00:14:47.127 cpu : usr=1.38%, sys=3.29%, ctx=5220, majf=0, minf=1 00:14:47.127 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:47.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.127 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.127 issued rwts: total=5219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.127 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:47.127 00:14:47.127 Run status group 0 (all jobs): 00:14:47.127 READ: bw=32.4MiB/s (34.0MB/s), 161KiB/s-16.2MiB/s (165kB/s-17.0MB/s), io=108MiB (113MB), run=2678-3335msec 00:14:47.127 00:14:47.127 Disk stats (read/write): 00:14:47.127 nvme0n1: ios=10244/0, merge=0/0, ticks=2894/0, in_queue=2894, util=93.59% 00:14:47.127 nvme0n2: ios=133/0, merge=0/0, ticks=3263/0, in_queue=3263, util=95.21% 00:14:47.127 nvme0n3: ios=11721/0, merge=0/0, ticks=2827/0, in_queue=2827, util=99.14% 00:14:47.127 nvme0n4: ios=4778/0, merge=0/0, ticks=2455/0, in_queue=2455, util=96.38% 00:14:47.127 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:47.127 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:47.392 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:47.392 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:47.648 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:47.648 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:47.905 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:47.905 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:47.905 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:47.905 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1485078 00:14:47.905 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:47.905 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:48.162 14:33:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:48.162 nvmf hotplug test: fio failed as expected 00:14:48.162 14:33:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.419 rmmod nvme_tcp 00:14:48.419 rmmod nvme_fabrics 00:14:48.419 rmmod nvme_keyring 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1482145 ']' 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1482145 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1482145 ']' 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1482145 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482145 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482145' 00:14:48.419 killing process with pid 1482145 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1482145 00:14:48.419 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1482145 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.678 14:34:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:51.213 00:14:51.213 real 0m26.944s 00:14:51.213 user 1m46.200s 00:14:51.213 sys 0m8.881s 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.213 ************************************ 00:14:51.213 END TEST nvmf_fio_target 00:14:51.213 ************************************ 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:51.213 
************************************ 00:14:51.213 START TEST nvmf_bdevio 00:14:51.213 ************************************ 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:51.213 * Looking for test storage... 00:14:51.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.213 14:34:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:51.213 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.214 --rc genhtml_branch_coverage=1 00:14:51.214 --rc genhtml_function_coverage=1 00:14:51.214 --rc genhtml_legend=1 00:14:51.214 --rc geninfo_all_blocks=1 00:14:51.214 --rc geninfo_unexecuted_blocks=1 00:14:51.214 00:14:51.214 ' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.214 --rc genhtml_branch_coverage=1 00:14:51.214 --rc genhtml_function_coverage=1 00:14:51.214 --rc genhtml_legend=1 00:14:51.214 --rc geninfo_all_blocks=1 00:14:51.214 --rc geninfo_unexecuted_blocks=1 00:14:51.214 00:14:51.214 ' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.214 --rc genhtml_branch_coverage=1 00:14:51.214 --rc genhtml_function_coverage=1 00:14:51.214 --rc genhtml_legend=1 00:14:51.214 --rc geninfo_all_blocks=1 00:14:51.214 --rc geninfo_unexecuted_blocks=1 00:14:51.214 00:14:51.214 ' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.214 --rc genhtml_branch_coverage=1 00:14:51.214 --rc genhtml_function_coverage=1 00:14:51.214 --rc genhtml_legend=1 00:14:51.214 --rc geninfo_all_blocks=1 00:14:51.214 --rc geninfo_unexecuted_blocks=1 00:14:51.214 00:14:51.214 ' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:51.214 14:34:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.785 14:34:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:57.785 14:34:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:57.785 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:57.785 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:57.785 
14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:57.785 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:57.786 Found net devices under 0000:86:00.0: cvl_0_0 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:57.786 Found net devices under 0000:86:00.1: cvl_0_1 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:57.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:57.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:14:57.786 00:14:57.786 --- 10.0.0.2 ping statistics --- 00:14:57.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.786 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:57.786 00:14:57.786 --- 10.0.0.1 ping statistics --- 00:14:57.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.786 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.786 14:34:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1489555 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1489555 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1489555 ']' 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.786 14:34:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.786 [2024-11-20 14:34:08.906364] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:14:57.786 [2024-11-20 14:34:08.906418] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.786 [2024-11-20 14:34:08.989657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.786 [2024-11-20 14:34:09.032326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.786 [2024-11-20 14:34:09.032367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.786 [2024-11-20 14:34:09.032374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.786 [2024-11-20 14:34:09.032380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.786 [2024-11-20 14:34:09.032386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
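The `nvmf_tcp_init` steps traced above (nvmf/common.sh@265–291) split the two `ice` ports into separate network namespaces so the target and initiator talk over real wires. A minimal sketch of that plumbing, using the interface names and addresses from the log — the `run` echo shim is an assumption added here so the sketch can be read and exercised without root; replace it with `"$@"` to actually execute:

```shell
#!/usr/bin/env bash
# Sketch of the namespace split performed by nvmf_tcp_init in the log:
# the target interface (cvl_0_0) moves into a private netns, the
# initiator interface (cvl_0_1) stays in the root namespace.
set -euo pipefail

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }  # dry-run shim (assumption); swap for "$@" as root

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# connectivity check in both directions, matching the pings in the log
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this split in place, the target app is launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible later in the log), so its listener on 10.0.0.2:4420 is only reachable through the physical link.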
00:14:57.786 [2024-11-20 14:34:09.033905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:57.786 [2024-11-20 14:34:09.034024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.786 [2024-11-20 14:34:09.033938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:57.786 [2024-11-20 14:34:09.034024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.786 [2024-11-20 14:34:09.175840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.786 14:34:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.786 Malloc0 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.786 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:57.787 [2024-11-20 14:34:09.242078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:57.787 { 00:14:57.787 "params": { 00:14:57.787 "name": "Nvme$subsystem", 00:14:57.787 "trtype": "$TEST_TRANSPORT", 00:14:57.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:57.787 "adrfam": "ipv4", 00:14:57.787 "trsvcid": "$NVMF_PORT", 00:14:57.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:57.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:57.787 "hdgst": ${hdgst:-false}, 00:14:57.787 "ddgst": ${ddgst:-false} 00:14:57.787 }, 00:14:57.787 "method": "bdev_nvme_attach_controller" 00:14:57.787 } 00:14:57.787 EOF 00:14:57.787 )") 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:57.787 14:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:57.787 "params": { 00:14:57.787 "name": "Nvme1", 00:14:57.787 "trtype": "tcp", 00:14:57.787 "traddr": "10.0.0.2", 00:14:57.787 "adrfam": "ipv4", 00:14:57.787 "trsvcid": "4420", 00:14:57.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:57.787 "hdgst": false, 00:14:57.787 "ddgst": false 00:14:57.787 }, 00:14:57.787 "method": "bdev_nvme_attach_controller" 00:14:57.787 }' 00:14:57.787 [2024-11-20 14:34:09.292595] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:14:57.787 [2024-11-20 14:34:09.292638] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489716 ] 00:14:57.787 [2024-11-20 14:34:09.368594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.787 [2024-11-20 14:34:09.412329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.787 [2024-11-20 14:34:09.412437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.787 [2024-11-20 14:34:09.412438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.787 I/O targets: 00:14:57.787 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:57.787 00:14:57.787 00:14:57.787 CUnit - A unit testing framework for C - Version 2.1-3 00:14:57.787 http://cunit.sourceforge.net/ 00:14:57.787 00:14:57.787 00:14:57.787 Suite: bdevio tests on: Nvme1n1 00:14:57.787 Test: blockdev write read block ...passed 00:14:58.044 Test: blockdev write zeroes read block ...passed 00:14:58.044 Test: blockdev write zeroes read no split ...passed 00:14:58.044 Test: blockdev write zeroes read split 
...passed 00:14:58.044 Test: blockdev write zeroes read split partial ...passed 00:14:58.044 Test: blockdev reset ...[2024-11-20 14:34:09.848046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:58.044 [2024-11-20 14:34:09.848112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1859340 (9): Bad file descriptor 00:14:58.044 [2024-11-20 14:34:09.990486] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:14:58.044 passed 00:14:58.301 Test: blockdev write read 8 blocks ...passed 00:14:58.301 Test: blockdev write read size > 128k ...passed 00:14:58.301 Test: blockdev write read invalid size ...passed 00:14:58.301 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:58.301 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:58.301 Test: blockdev write read max offset ...passed 00:14:58.301 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:58.301 Test: blockdev writev readv 8 blocks ...passed 00:14:58.301 Test: blockdev writev readv 30 x 1block ...passed 00:14:58.301 Test: blockdev writev readv block ...passed 00:14:58.301 Test: blockdev writev readv size > 128k ...passed 00:14:58.301 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:58.301 Test: blockdev comparev and writev ...[2024-11-20 14:34:10.199832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.199863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.199877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 
14:34:10.199885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.200135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.200146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.200158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.200165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.200418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.200430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.200443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.200450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.200681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.200691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:58.301 [2024-11-20 14:34:10.200703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:14:58.301 [2024-11-20 14:34:10.200710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:58.301 passed 00:14:58.559 Test: blockdev nvme passthru rw ...passed 00:14:58.559 Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:34:10.282332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.559 [2024-11-20 14:34:10.282356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:58.559 [2024-11-20 14:34:10.282475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.559 [2024-11-20 14:34:10.282485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:58.559 [2024-11-20 14:34:10.282592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.559 [2024-11-20 14:34:10.282602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:58.559 [2024-11-20 14:34:10.282706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:58.559 [2024-11-20 14:34:10.282716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:58.559 passed 00:14:58.559 Test: blockdev nvme admin passthru ...passed 00:14:58.559 Test: blockdev copy ...passed 00:14:58.559 00:14:58.559 Run Summary: Type Total Ran Passed Failed Inactive 00:14:58.559 suites 1 1 n/a 0 0 00:14:58.559 tests 23 23 23 0 0 00:14:58.559 asserts 152 152 152 0 n/a 00:14:58.559 00:14:58.559 Elapsed time = 1.301 seconds 
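The bdevio run above was fed its controller config on `/dev/fd/62` by `gen_nvmf_target_json`, whose heredoc-per-subsystem expansion is visible in the trace. A simplified sketch of that pattern — render one JSON entry per subsystem into an array, then join with commas; the final `jq .` pretty-print step from nvmf/common.sh is omitted here, and the variable defaults are assumptions standing in for the test environment:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced in the log:
# a heredoc template is expanded once per subsystem number, collected
# into an array, and the entries are comma-joined into one JSON stream.
set -euo pipefail

TEST_TRANSPORT=tcp           # assumed defaults mirroring the log
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,          # join array entries with commas
    printf '%s\n' "${config[*]}"
}
```

Feeding the result through process substitution (`--json /dev/fd/62`, as in the log) lets bdevio attach to the target without a temporary config file on disk.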
00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.559 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.559 rmmod nvme_tcp 00:14:58.818 rmmod nvme_fabrics 00:14:58.818 rmmod nvme_keyring 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1489555 ']' 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1489555 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1489555 ']' 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1489555 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489555 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489555' 00:14:58.818 killing process with pid 1489555 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1489555 00:14:58.818 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1489555 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.076 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.077 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.077 14:34:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.983 14:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:00.983 00:15:00.983 real 0m10.243s 00:15:00.983 user 0m11.235s 00:15:00.983 sys 0m5.079s 00:15:00.983 14:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.983 14:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:00.983 ************************************ 00:15:00.983 END TEST nvmf_bdevio 00:15:00.983 ************************************ 00:15:00.983 14:34:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:00.983 00:15:00.983 real 4m38.682s 00:15:00.983 user 10m27.865s 00:15:00.983 sys 1m38.936s 00:15:00.983 14:34:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.983 14:34:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:00.983 ************************************ 00:15:00.983 END TEST nvmf_target_core 00:15:00.983 ************************************ 00:15:01.243 14:34:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:01.243 14:34:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.243 14:34:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.243 14:34:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:15:01.243 ************************************ 00:15:01.243 START TEST nvmf_target_extra 00:15:01.243 ************************************ 00:15:01.243 14:34:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:01.243 * Looking for test storage... 00:15:01.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.243 --rc genhtml_branch_coverage=1 00:15:01.243 --rc genhtml_function_coverage=1 00:15:01.243 --rc genhtml_legend=1 00:15:01.243 --rc geninfo_all_blocks=1 
00:15:01.243 --rc geninfo_unexecuted_blocks=1 00:15:01.243 00:15:01.243 ' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.243 --rc genhtml_branch_coverage=1 00:15:01.243 --rc genhtml_function_coverage=1 00:15:01.243 --rc genhtml_legend=1 00:15:01.243 --rc geninfo_all_blocks=1 00:15:01.243 --rc geninfo_unexecuted_blocks=1 00:15:01.243 00:15:01.243 ' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.243 --rc genhtml_branch_coverage=1 00:15:01.243 --rc genhtml_function_coverage=1 00:15:01.243 --rc genhtml_legend=1 00:15:01.243 --rc geninfo_all_blocks=1 00:15:01.243 --rc geninfo_unexecuted_blocks=1 00:15:01.243 00:15:01.243 ' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.243 --rc genhtml_branch_coverage=1 00:15:01.243 --rc genhtml_function_coverage=1 00:15:01.243 --rc genhtml_legend=1 00:15:01.243 --rc geninfo_all_blocks=1 00:15:01.243 --rc geninfo_unexecuted_blocks=1 00:15:01.243 00:15:01.243 ' 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:15:01.243 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.244 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.505 ************************************ 00:15:01.505 START TEST nvmf_example 00:15:01.505 ************************************ 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:01.505 * Looking for test storage... 00:15:01.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.505 
14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:01.505 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.506 --rc genhtml_branch_coverage=1 00:15:01.506 --rc genhtml_function_coverage=1 00:15:01.506 --rc genhtml_legend=1 00:15:01.506 --rc geninfo_all_blocks=1 00:15:01.506 --rc geninfo_unexecuted_blocks=1 00:15:01.506 00:15:01.506 ' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.506 --rc genhtml_branch_coverage=1 00:15:01.506 --rc genhtml_function_coverage=1 00:15:01.506 --rc genhtml_legend=1 00:15:01.506 --rc geninfo_all_blocks=1 00:15:01.506 --rc geninfo_unexecuted_blocks=1 00:15:01.506 00:15:01.506 ' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.506 --rc genhtml_branch_coverage=1 00:15:01.506 --rc genhtml_function_coverage=1 00:15:01.506 --rc genhtml_legend=1 00:15:01.506 --rc geninfo_all_blocks=1 00:15:01.506 --rc geninfo_unexecuted_blocks=1 00:15:01.506 00:15:01.506 ' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.506 --rc 
genhtml_branch_coverage=1 00:15:01.506 --rc genhtml_function_coverage=1 00:15:01.506 --rc genhtml_legend=1 00:15:01.506 --rc geninfo_all_blocks=1 00:15:01.506 --rc geninfo_unexecuted_blocks=1 00:15:01.506 00:15:01.506 ' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:01.506 14:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.506 
14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.506 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.077 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.077 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:08.077 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:08.077 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:08.077 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:08.077 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:08.078 14:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:08.078 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:08.078 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:08.078 Found net devices under 0000:86:00.0: cvl_0_0 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:08.078 14:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:08.078 Found net devices under 0000:86:00.1: cvl_0_1 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.078 
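The discovery loop traced above (`nvmf/common.sh` lines ~410-429) finds the net interfaces behind each e810 PCI function by globbing sysfs. A minimal sketch of that lookup, not SPDK's exact code; the optional second parameter is a hypothetical base-dir override that exists only so the sketch can be exercised against a fake tree without real hardware:

```shell
# Sketch of the PCI -> netdev discovery loop from the trace: each NIC
# PCI function's net interfaces live under /sys/bus/pci/devices/$pci/net/.
net_devs_for_pci() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local devs=( "$base/$pci/net/"* )
    [[ -e ${devs[0]} ]] || return 1        # glob did not match: no net devs
    printf '%s\n' "${devs[@]##*/}"         # strip the path, keep the ifname
}
```

With the defaults this walks the real sysfs tree, matching the `Found net devices under 0000:86:00.0: cvl_0_0` lines in the log.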
14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:08.078 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:08.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:15:08.078 00:15:08.079 --- 10.0.0.2 ping statistics --- 00:15:08.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.079 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:15:08.079 00:15:08.079 --- 10.0.0.1 ping statistics --- 00:15:08.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.079 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.079 14:34:19 
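The `nvmf_tcp_init` steps traced above move one NIC port into a private network namespace so target and initiator can talk over real hardware on a single host, then open TCP port 4420 and verify the path with pings in both directions. A dry-runnable sketch of the same sequence (interface and namespace names mirror the log; `run()` echoes instead of executing when `DRY_RUN=1`, since the real commands need root):

```shell
# Sketch of the netns loopback topology from the trace: target port in a
# private namespace at 10.0.0.2, initiator port in the root namespace at
# 10.0.0.1, port 4420 opened, connectivity checked with ping.
run() { ${DRY_RUN:+echo} "$@"; }   # DRY_RUN=1 => print commands only

setup_tcp_netns() {
    local tgt_if=$1 ini_if=$2 ns=$3
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"            # target port into the netns
    run ip addr add 10.0.0.1/24 dev "$ini_if"        # initiator side, root netns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                           # root netns -> target netns
}

DRY_RUN=1 setup_tcp_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Running the target inside the namespace is what makes the later `ip netns exec cvl_0_0_ns_spdk .../examples/nvmf` invocation necessary.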
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1493534 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1493534 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1493534 ']' 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:15:08.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.079 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:08.645 
14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:08.645 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:20.838 Initializing NVMe Controllers 00:15:20.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:20.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:20.838 Initialization complete. Launching workers. 00:15:20.838 ======================================================== 00:15:20.838 Latency(us) 00:15:20.838 Device Information : IOPS MiB/s Average min max 00:15:20.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17871.50 69.81 3580.66 534.01 19018.36 00:15:20.838 ======================================================== 00:15:20.838 Total : 17871.50 69.81 3580.66 534.01 19018.36 00:15:20.838 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:20.838 rmmod nvme_tcp 00:15:20.838 rmmod nvme_fabrics 00:15:20.838 rmmod nvme_keyring 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
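Before the `spdk_nvme_perf` run above, the trace configures the target through five RPCs (`nvmf_example.sh` lines 45-57). A replay sketch of that sequence: `rpc()` here is a stub that prints what would be sent; pointing it at `scripts/rpc.py` would drive a live target instead. The RPC names and arguments are the ones visible in the log:

```shell
# Replay sketch of the target-configuration RPCs from the trace.
rpc() { echo rpc.py "$@"; }   # stub: print instead of sending to the target

configure_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
    rpc bdev_malloc_create 64 512                  # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
configure_target
```

Once the listener is up on 10.0.0.2:4420, `spdk_nvme_perf` connects with the matching `trtype:tcp ... trsvcid:4420 subnqn:...cnode1` connection string and produces the latency table above.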
00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1493534 ']' 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1493534 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1493534 ']' 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1493534 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493534 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493534' 00:15:20.838 killing process with pid 1493534 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1493534 00:15:20.838 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1493534 00:15:20.838 nvmf threads initialize successfully 00:15:20.838 bdev subsystem init successfully 00:15:20.838 created a nvmf target service 00:15:20.838 create targets's poll groups done 00:15:20.838 all subsystems of target started 00:15:20.838 nvmf target is running 00:15:20.839 all subsystems of target stopped 00:15:20.839 destroy targets's poll groups done 00:15:20.839 destroyed the nvmf target service 00:15:20.839 bdev subsystem 
finish successfully 00:15:20.839 nvmf threads destroy successfully 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.839 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.098 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:21.098 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:21.098 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:21.098 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:21.098 00:15:21.098 real 0m19.766s 00:15:21.098 user 0m45.840s 00:15:21.098 sys 0m6.088s 00:15:21.098 
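The `iptr` cleanup traced above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) works because the setup phase tagged its ACCEPT rule with an `SPDK_NVMF` comment: teardown can then drop exactly the rules the test added while leaving unrelated firewall rules intact. A small sketch of the filter step, demonstrated on canned `iptables-save`-style lines rather than the live rule set:

```shell
# Sketch of the selective firewall scrub from the trace: only rules
# carrying the SPDK_NVMF comment tag are dropped from the saved rule set.
scrub_spdk_rules() { grep -v SPDK_NVMF || true; }   # all-filtered input is not an error

# Demonstration on canned iptables-save lines; the real flow pipes the
# result back into iptables-restore.
printf '%s\n' \
  '-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT' \
  '-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule' \
  | scrub_spdk_rules
```

Tagging rules at insert time and filtering by tag at teardown is what lets many test runs share one host firewall safely.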
14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.098 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:21.098 ************************************ 00:15:21.098 END TEST nvmf_example 00:15:21.098 ************************************ 00:15:21.098 14:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:21.098 14:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.098 14:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.098 14:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.359 ************************************ 00:15:21.359 START TEST nvmf_filesystem 00:15:21.359 ************************************ 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:21.359 * Looking for test storage... 
00:15:21.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:21.359 
14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:21.359 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:21.359 --rc genhtml_branch_coverage=1 00:15:21.359 --rc genhtml_function_coverage=1 00:15:21.359 --rc genhtml_legend=1 00:15:21.359 --rc geninfo_all_blocks=1 00:15:21.359 --rc geninfo_unexecuted_blocks=1 00:15:21.359 00:15:21.359 ' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.359 --rc genhtml_branch_coverage=1 00:15:21.359 --rc genhtml_function_coverage=1 00:15:21.359 --rc genhtml_legend=1 00:15:21.359 --rc geninfo_all_blocks=1 00:15:21.359 --rc geninfo_unexecuted_blocks=1 00:15:21.359 00:15:21.359 ' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.359 --rc genhtml_branch_coverage=1 00:15:21.359 --rc genhtml_function_coverage=1 00:15:21.359 --rc genhtml_legend=1 00:15:21.359 --rc geninfo_all_blocks=1 00:15:21.359 --rc geninfo_unexecuted_blocks=1 00:15:21.359 00:15:21.359 ' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.359 --rc genhtml_branch_coverage=1 00:15:21.359 --rc genhtml_function_coverage=1 00:15:21.359 --rc genhtml_legend=1 00:15:21.359 --rc geninfo_all_blocks=1 00:15:21.359 --rc geninfo_unexecuted_blocks=1 00:15:21.359 00:15:21.359 ' 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:21.359 14:34:33 
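The `cmp_versions` walk traced above (`scripts/common.sh` deciding whether the installed `lcov` 1.15 is older than 2 before choosing coverage flags) splits each dotted version on `.` and `-` and compares field by field, treating missing fields as 0. A hypothetical reimplementation of that idea, not the script's exact code:

```shell
# Sketch of a field-by-field dotted-version compare: returns success
# when $1 is strictly older than $2 (missing fields count as 0).
ver_lt() {
    local IFS=.- a b i n
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions: not strictly less-than
}
```

Numeric field-wise comparison avoids the classic string-compare trap where `"1.15" < "1.2"` lexicographically.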
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:21.359 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:21.360 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:21.360 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:21.360 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:21.360 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:21.360 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:21.361 
14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:21.361 #define SPDK_CONFIG_H 00:15:21.361 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:21.361 #define SPDK_CONFIG_APPS 1 00:15:21.361 #define SPDK_CONFIG_ARCH native 00:15:21.361 #undef SPDK_CONFIG_ASAN 00:15:21.361 #undef SPDK_CONFIG_AVAHI 00:15:21.361 #undef SPDK_CONFIG_CET 00:15:21.361 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:21.361 #define SPDK_CONFIG_COVERAGE 1 00:15:21.361 #define SPDK_CONFIG_CROSS_PREFIX 00:15:21.361 #undef SPDK_CONFIG_CRYPTO 00:15:21.361 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:21.361 #undef SPDK_CONFIG_CUSTOMOCF 00:15:21.361 #undef SPDK_CONFIG_DAOS 00:15:21.361 #define SPDK_CONFIG_DAOS_DIR 00:15:21.361 #define SPDK_CONFIG_DEBUG 1 00:15:21.361 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:21.361 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:21.361 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:21.361 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:21.361 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:21.361 #undef SPDK_CONFIG_DPDK_UADK 00:15:21.361 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:21.361 #define SPDK_CONFIG_EXAMPLES 1 00:15:21.361 #undef SPDK_CONFIG_FC 00:15:21.361 #define SPDK_CONFIG_FC_PATH 00:15:21.361 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:21.361 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:21.361 #define SPDK_CONFIG_FSDEV 1 00:15:21.361 #undef SPDK_CONFIG_FUSE 00:15:21.361 #undef SPDK_CONFIG_FUZZER 00:15:21.361 #define SPDK_CONFIG_FUZZER_LIB 00:15:21.361 #undef SPDK_CONFIG_GOLANG 00:15:21.361 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:21.361 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:21.361 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:21.361 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:21.361 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:21.361 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:21.361 #undef SPDK_CONFIG_HAVE_LZ4 00:15:21.361 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:21.361 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:21.361 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:21.361 #define SPDK_CONFIG_IDXD 1 00:15:21.361 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:21.361 #undef SPDK_CONFIG_IPSEC_MB 00:15:21.361 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:21.361 #define SPDK_CONFIG_ISAL 1 00:15:21.361 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:21.361 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:21.361 #define SPDK_CONFIG_LIBDIR 00:15:21.361 #undef SPDK_CONFIG_LTO 00:15:21.361 #define SPDK_CONFIG_MAX_LCORES 128 00:15:21.361 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:21.361 #define SPDK_CONFIG_NVME_CUSE 1 00:15:21.361 #undef SPDK_CONFIG_OCF 00:15:21.361 #define SPDK_CONFIG_OCF_PATH 00:15:21.361 #define SPDK_CONFIG_OPENSSL_PATH 00:15:21.361 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:21.361 #define SPDK_CONFIG_PGO_DIR 00:15:21.361 #undef SPDK_CONFIG_PGO_USE 00:15:21.361 #define SPDK_CONFIG_PREFIX /usr/local 00:15:21.361 #undef SPDK_CONFIG_RAID5F 00:15:21.361 #undef SPDK_CONFIG_RBD 00:15:21.361 #define SPDK_CONFIG_RDMA 1 00:15:21.361 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:21.361 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:21.361 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:21.361 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:21.361 #define SPDK_CONFIG_SHARED 1 00:15:21.361 #undef SPDK_CONFIG_SMA 00:15:21.361 #define SPDK_CONFIG_TESTS 1 00:15:21.361 #undef SPDK_CONFIG_TSAN 00:15:21.361 #define SPDK_CONFIG_UBLK 1 00:15:21.361 #define SPDK_CONFIG_UBSAN 1 00:15:21.361 #undef SPDK_CONFIG_UNIT_TESTS 00:15:21.361 #undef SPDK_CONFIG_URING 00:15:21.361 #define SPDK_CONFIG_URING_PATH 00:15:21.361 #undef SPDK_CONFIG_URING_ZNS 00:15:21.361 #undef SPDK_CONFIG_USDT 00:15:21.361 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:21.361 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:21.361 #define SPDK_CONFIG_VFIO_USER 1 00:15:21.361 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:21.361 #define SPDK_CONFIG_VHOST 1 00:15:21.361 #define SPDK_CONFIG_VIRTIO 1 00:15:21.361 #undef SPDK_CONFIG_VTUNE 00:15:21.361 #define SPDK_CONFIG_VTUNE_DIR 00:15:21.361 #define SPDK_CONFIG_WERROR 1 00:15:21.361 #define SPDK_CONFIG_WPDK_DIR 00:15:21.361 #undef SPDK_CONFIG_XNVME 00:15:21.361 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:15:21.361 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:21.362 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:21.362 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:21.624 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:21.624 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:21.624 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:21.625 
14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:21.625 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:21.625 
14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:21.625 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:21.625 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1495941 ]] 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1495941 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:21.626 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.1j8UkY 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1j8UkY/tests/target /tmp/spdk.1j8UkY 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189187629056 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6776332288 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981341696 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:15:21.627 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=638976 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:21.627 * Looking for test storage... 
00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189187629056 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8990924800 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.627 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:21.627 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:21.627 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:21.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.628 --rc genhtml_branch_coverage=1 00:15:21.628 --rc genhtml_function_coverage=1 00:15:21.628 --rc genhtml_legend=1 00:15:21.628 --rc geninfo_all_blocks=1 00:15:21.628 --rc geninfo_unexecuted_blocks=1 00:15:21.628 00:15:21.628 ' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:21.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.628 --rc genhtml_branch_coverage=1 00:15:21.628 --rc genhtml_function_coverage=1 00:15:21.628 --rc genhtml_legend=1 00:15:21.628 --rc geninfo_all_blocks=1 00:15:21.628 --rc geninfo_unexecuted_blocks=1 00:15:21.628 00:15:21.628 ' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:21.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.628 --rc genhtml_branch_coverage=1 00:15:21.628 --rc genhtml_function_coverage=1 00:15:21.628 --rc genhtml_legend=1 00:15:21.628 --rc geninfo_all_blocks=1 00:15:21.628 --rc geninfo_unexecuted_blocks=1 00:15:21.628 00:15:21.628 ' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:21.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.628 --rc genhtml_branch_coverage=1 00:15:21.628 --rc genhtml_function_coverage=1 00:15:21.628 --rc genhtml_legend=1 00:15:21.628 --rc geninfo_all_blocks=1 00:15:21.628 --rc geninfo_unexecuted_blocks=1 00:15:21.628 00:15:21.628 ' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.628 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:21.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:21.628 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.201 14:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:28.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:28.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.201 14:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:28.201 Found net devices under 0000:86:00.0: cvl_0_0 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:28.201 Found net devices under 0000:86:00.1: cvl_0_1 00:15:28.201 14:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:28.201 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:28.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:28.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:15:28.202 00:15:28.202 --- 10.0.0.2 ping statistics --- 00:15:28.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.202 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:28.202 00:15:28.202 --- 10.0.0.1 ping statistics --- 00:15:28.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.202 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:28.202 14:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 ************************************ 00:15:28.202 START TEST nvmf_filesystem_no_in_capsule 00:15:28.202 ************************************ 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1498982 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1498982 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1498982 ']' 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 [2024-11-20 14:34:39.602571] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:15:28.202 [2024-11-20 14:34:39.602608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.202 [2024-11-20 14:34:39.668074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.202 [2024-11-20 14:34:39.711058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.202 [2024-11-20 14:34:39.711093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:28.202 [2024-11-20 14:34:39.711100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.202 [2024-11-20 14:34:39.711106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.202 [2024-11-20 14:34:39.711110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.202 [2024-11-20 14:34:39.715968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.202 [2024-11-20 14:34:39.716016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.202 [2024-11-20 14:34:39.716128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.202 [2024-11-20 14:34:39.716128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 [2024-11-20 14:34:39.854733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 Malloc1 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.202 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.202 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:28.202 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.203 [2024-11-20 14:34:40.017240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:28.203 14:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:28.203 { 00:15:28.203 "name": "Malloc1", 00:15:28.203 "aliases": [ 00:15:28.203 "6f433bcc-c638-442e-bf73-91f06b422b53" 00:15:28.203 ], 00:15:28.203 "product_name": "Malloc disk", 00:15:28.203 "block_size": 512, 00:15:28.203 "num_blocks": 1048576, 00:15:28.203 "uuid": "6f433bcc-c638-442e-bf73-91f06b422b53", 00:15:28.203 "assigned_rate_limits": { 00:15:28.203 "rw_ios_per_sec": 0, 00:15:28.203 "rw_mbytes_per_sec": 0, 00:15:28.203 "r_mbytes_per_sec": 0, 00:15:28.203 "w_mbytes_per_sec": 0 00:15:28.203 }, 00:15:28.203 "claimed": true, 00:15:28.203 "claim_type": "exclusive_write", 00:15:28.203 "zoned": false, 00:15:28.203 "supported_io_types": { 00:15:28.203 "read": true, 00:15:28.203 "write": true, 00:15:28.203 "unmap": true, 00:15:28.203 "flush": true, 00:15:28.203 "reset": true, 00:15:28.203 "nvme_admin": false, 00:15:28.203 "nvme_io": false, 00:15:28.203 "nvme_io_md": false, 00:15:28.203 "write_zeroes": true, 00:15:28.203 "zcopy": true, 00:15:28.203 "get_zone_info": false, 00:15:28.203 "zone_management": false, 00:15:28.203 "zone_append": false, 00:15:28.203 "compare": false, 00:15:28.203 "compare_and_write": 
false, 00:15:28.203 "abort": true, 00:15:28.203 "seek_hole": false, 00:15:28.203 "seek_data": false, 00:15:28.203 "copy": true, 00:15:28.203 "nvme_iov_md": false 00:15:28.203 }, 00:15:28.203 "memory_domains": [ 00:15:28.203 { 00:15:28.203 "dma_device_id": "system", 00:15:28.203 "dma_device_type": 1 00:15:28.203 }, 00:15:28.203 { 00:15:28.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.203 "dma_device_type": 2 00:15:28.203 } 00:15:28.203 ], 00:15:28.203 "driver_specific": {} 00:15:28.203 } 00:15:28.203 ]' 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:28.203 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:29.568 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:15:29.568 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:29.568 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.568 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:29.568 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:31.461 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:31.461 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:31.461 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.461 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:31.461 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:31.462 14:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:31.462 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:31.719 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:32.283 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:33.215 14:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.215 ************************************ 00:15:33.215 START TEST filesystem_ext4 00:15:33.215 ************************************ 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:33.215 14:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:15:33.215 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:15:33.215 mke2fs 1.47.0 (5-Feb-2023)
00:15:33.472 Discarding device blocks: 0/522240 done
00:15:33.472 Creating filesystem with 522240 1k blocks and 130560 inodes
00:15:33.472 Filesystem UUID: 63dbc21f-653c-41f3-ab79-cea06a77949a
00:15:33.472 Superblock backups stored on blocks:
00:15:33.472 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:15:33.472
00:15:33.472 Allocating group tables: 0/64 done
00:15:33.472 Writing inode tables: 0/64 done
00:15:33.729 Creating journal (8192 blocks): done
00:15:35.663 Writing superblocks and filesystem accounting information: 0/64 4/64 done
00:15:35.663
00:15:35.663 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:15:35.663 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:42.214 14:34:53
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1498982 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:42.214 00:15:42.214 real 0m8.343s 00:15:42.214 user 0m0.032s 00:15:42.214 sys 0m0.069s 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:42.214 ************************************ 00:15:42.214 END TEST filesystem_ext4 00:15:42.214 ************************************ 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:42.214 
14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:42.214 ************************************ 00:15:42.214 START TEST filesystem_btrfs 00:15:42.214 ************************************ 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:42.214 14:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:15:42.214 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:15:42.214 btrfs-progs v6.8.1
00:15:42.214 See https://btrfs.readthedocs.io for more information.
00:15:42.214
00:15:42.215 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:15:42.215 NOTE: several default settings have changed in version 5.15, please make sure
00:15:42.215 this does not affect your deployments:
00:15:42.215 - DUP for metadata (-m dup)
00:15:42.215 - enabled no-holes (-O no-holes)
00:15:42.215 - enabled free-space-tree (-R free-space-tree)
00:15:42.215
00:15:42.215 Label: (null)
00:15:42.215 UUID: c1652cf5-622e-47df-9090-1e3e18655a81
00:15:42.215 Node size: 16384
00:15:42.215 Sector size: 4096 (CPU page size: 4096)
00:15:42.215 Filesystem size: 510.00MiB
00:15:42.215 Block group profiles:
00:15:42.215 Data: single 8.00MiB
00:15:42.215 Metadata: DUP 32.00MiB
00:15:42.215 System: DUP 8.00MiB
00:15:42.215 SSD detected: yes
00:15:42.215 Zoned device: no
00:15:42.215 Features: extref, skinny-metadata, no-holes, free-space-tree
00:15:42.215 Checksum: crc32c
00:15:42.215 Number of devices: 1
00:15:42.215 Devices:
00:15:42.215 ID SIZE PATH
00:15:42.215 1 510.00MiB /dev/nvme0n1p1
00:15:42.215
00:15:42.215 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:15:42.215 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:42.472 14:34:54
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:42.472 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:42.472 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1498982 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:42.731 00:15:42.731 real 0m0.913s 00:15:42.731 user 0m0.020s 00:15:42.731 sys 0m0.117s 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.731 
14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:42.731 ************************************ 00:15:42.731 END TEST filesystem_btrfs 00:15:42.731 ************************************ 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:42.731 ************************************ 00:15:42.731 START TEST filesystem_xfs 00:15:42.731 ************************************ 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:15:42.731 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:15:42.731 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:15:42.731 = sectsz=512 attr=2, projid32bit=1
00:15:42.731 = crc=1 finobt=1, sparse=1, rmapbt=0
00:15:42.731 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:15:42.731 data = bsize=4096 blocks=130560, imaxpct=25
00:15:42.731 = sunit=0 swidth=0 blks
00:15:42.731 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:15:42.731 log =internal log bsize=4096 blocks=16384, version=2
00:15:42.731 = sectsz=512 sunit=0 blks, lazy-count=1
00:15:42.731 realtime =none extsz=4096 blocks=0, rtextents=0
00:15:43.664 Discarding blocks...Done.
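All three TESTs above go through the same `make_filesystem` helper, which selects `-F` for mkfs.ext4 and `-f` for mkfs.btrfs and mkfs.xfs before formatting `/dev/nvme0n1p1`. The sketch below is a minimal, self-contained illustration of that dispatch only; the function names `fs_force_flag` and `make_filesystem_cmd` are invented for this example (the real helper in autotest_common.sh also retries and checks the device), and it prints the command instead of running mkfs:

```shell
#!/usr/bin/env bash
# Pick the "force" flag the way the trace above does: mkfs.ext4 uses
# uppercase -F, while mkfs.btrfs and mkfs.xfs both take lowercase -f.
fs_force_flag() {
    local fstype=$1
    if [ "$fstype" = ext4 ]; then
        printf '%s\n' "-F"
    else
        printf '%s\n' "-f"
    fi
}

# Dry-run variant: emit the mkfs command line instead of formatting anything.
make_filesystem_cmd() {
    local fstype=$1 dev_name=$2
    echo "mkfs.$fstype $(fs_force_flag "$fstype") $dev_name"
}

make_filesystem_cmd ext4 /dev/nvme0n1p1   # mkfs.ext4 -F /dev/nvme0n1p1
make_filesystem_cmd xfs /dev/nvme0n1p1    # mkfs.xfs -f /dev/nvme0n1p1
```

Keeping the flag selection separate from the mkfs invocation makes the ext4 special case easy to see in the xtrace output (`force=-F` vs `force=-f`).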
00:15:43.664 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:43.664 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1498982 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:45.608 14:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:45.608 00:15:45.608 real 0m2.665s 00:15:45.608 user 0m0.018s 00:15:45.608 sys 0m0.080s 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:45.608 ************************************ 00:15:45.608 END TEST filesystem_xfs 00:15:45.608 ************************************ 00:15:45.608 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1498982 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1498982 ']' 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1498982 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1498982 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1498982' 00:15:45.916 killing process with pid 1498982 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1498982 00:15:45.916 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1498982 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:46.183 00:15:46.183 real 0m18.505s 00:15:46.183 user 1m12.865s 00:15:46.183 sys 0m1.448s 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.183 ************************************ 00:15:46.183 END TEST nvmf_filesystem_no_in_capsule 00:15:46.183 ************************************ 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.183 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:46.183 ************************************ 00:15:46.183 START TEST nvmf_filesystem_in_capsule 00:15:46.183 ************************************ 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1502251 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1502251 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1502251 ']' 00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.183 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:46.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:46.183 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:46.441 [2024-11-20 14:34:58.179145] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:15:46.442 [2024-11-20 14:34:58.179191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:46.442 [2024-11-20 14:34:58.257899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:46.442 [2024-11-20 14:34:58.299066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:46.442 [2024-11-20 14:34:58.299106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:46.442 [2024-11-20 14:34:58.299116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:46.442 [2024-11-20 14:34:58.299122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:46.442 [2024-11-20 14:34:58.299126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
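The `waitforlisten` step above blocks until the target's RPC socket (`/var/tmp/spdk.sock`) appears before any `rpc_cmd` is issued, giving up after `max_retries` attempts. Below is a minimal, self-contained sketch of that polling pattern; `wait_for_path` is an invented name, and a plain temp file stands in for the UNIX domain socket so the example runs anywhere:

```shell
#!/usr/bin/env bash
# Poll until a path exists, the way waitforlisten polls for the RPC socket.
# Returns 0 once the path appears, 1 after max_retries attempts.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -gt "$max_retries" ]; then
            return 1    # gave up, analogous to waitforlisten timing out
        fi
        sleep 0.1
    done
    return 0
}

# Demo: a background job "starts listening" after a short delay.
sock=$(mktemp -u)
( sleep 0.3; touch "$sock" ) &
wait_for_path "$sock" && echo "listening on $sock"
wait
rm -f "$sock"
```

The real helper additionally verifies that the target process is still alive between attempts, so a crashed `nvmf_tgt` fails fast instead of burning the full retry budget.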
00:15:46.442 [2024-11-20 14:34:58.300609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:46.442 [2024-11-20 14:34:58.300714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:46.442 [2024-11-20 14:34:58.300830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:46.442 [2024-11-20 14:34:58.300831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:46.699 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:46.700 [2024-11-20 14:34:58.446641] tcp.c:
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.700 Malloc1 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.700 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.700 [2024-11-20 14:34:58.598134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.700 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:46.700 { 00:15:46.700 "name": "Malloc1", 00:15:46.700 "aliases": [ 00:15:46.700 "ba9433ae-3637-4fa1-85a0-3b4acf717956" 00:15:46.700 ], 00:15:46.700 "product_name": "Malloc disk", 00:15:46.700 "block_size": 512, 00:15:46.700 "num_blocks": 1048576, 00:15:46.700 "uuid": "ba9433ae-3637-4fa1-85a0-3b4acf717956", 00:15:46.700 "assigned_rate_limits": { 00:15:46.700 "rw_ios_per_sec": 0, 00:15:46.700 "rw_mbytes_per_sec": 0, 00:15:46.700 "r_mbytes_per_sec": 0, 00:15:46.700 "w_mbytes_per_sec": 0 00:15:46.700 }, 00:15:46.700 "claimed": true, 00:15:46.700 "claim_type": "exclusive_write", 00:15:46.700 "zoned": false, 00:15:46.700 "supported_io_types": { 00:15:46.700 "read": true, 00:15:46.700 "write": true, 00:15:46.700 "unmap": true, 00:15:46.700 "flush": true, 00:15:46.700 "reset": true, 00:15:46.700 "nvme_admin": false, 00:15:46.700 "nvme_io": false, 00:15:46.700 "nvme_io_md": false, 00:15:46.700 "write_zeroes": true, 00:15:46.700 "zcopy": true, 00:15:46.700 "get_zone_info": false, 00:15:46.700 "zone_management": false, 00:15:46.700 "zone_append": false, 00:15:46.700 "compare": false, 00:15:46.700 "compare_and_write": false, 00:15:46.700 "abort": true, 00:15:46.700 "seek_hole": false, 00:15:46.700 "seek_data": false, 00:15:46.700 "copy": true, 00:15:46.700 "nvme_iov_md": false 00:15:46.700 }, 00:15:46.700 "memory_domains": [ 00:15:46.700 { 00:15:46.700 "dma_device_id": "system", 00:15:46.700 "dma_device_type": 1 00:15:46.700 }, 00:15:46.700 { 00:15:46.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.700 "dma_device_type": 2 00:15:46.700 } 00:15:46.700 ], 00:15:46.700 
"driver_specific": {} 00:15:46.700 } 00:15:46.700 ]' 00:15:46.700 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:46.958 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:47.891 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:47.891 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:47.891 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:47.891 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:15:47.891 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:50.415 14:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:50.415 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:50.673 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.608 ************************************ 00:15:51.608 START TEST filesystem_in_capsule_ext4 00:15:51.608 ************************************ 00:15:51.608 14:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:51.608 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:51.608 mke2fs 1.47.0 (5-Feb-2023) 00:15:51.866 Discarding device blocks: 
0/522240 done 00:15:51.866 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:51.866 Filesystem UUID: 386feb9b-d034-486f-a215-4360660263f4 00:15:51.866 Superblock backups stored on blocks: 00:15:51.866 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:51.866 00:15:51.866 Allocating group tables: 0/64 done 00:15:51.866 Writing inode tables: 0/64 done 00:15:54.392 Creating journal (8192 blocks): done 00:15:54.392 Writing superblocks and filesystem accounting information: 0/64 done 00:15:54.392 00:15:54.392 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:54.392 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1502251 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:00.944 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:00.944 00:16:00.944 real 0m8.491s 00:16:00.944 user 0m0.027s 00:16:00.944 sys 0m0.072s 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:00.944 ************************************ 00:16:00.944 END TEST filesystem_in_capsule_ext4 00:16:00.944 ************************************ 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:00.944 ************************************ 00:16:00.944 START 
TEST filesystem_in_capsule_btrfs 00:16:00.944 ************************************ 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:00.944 btrfs-progs v6.8.1 00:16:00.944 See https://btrfs.readthedocs.io for more information. 00:16:00.944 00:16:00.944 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:00.944 NOTE: several default settings have changed in version 5.15, please make sure 00:16:00.944 this does not affect your deployments: 00:16:00.944 - DUP for metadata (-m dup) 00:16:00.944 - enabled no-holes (-O no-holes) 00:16:00.944 - enabled free-space-tree (-R free-space-tree) 00:16:00.944 00:16:00.944 Label: (null) 00:16:00.944 UUID: 105b5186-0e1a-44ad-996c-bc006a932458 00:16:00.944 Node size: 16384 00:16:00.944 Sector size: 4096 (CPU page size: 4096) 00:16:00.944 Filesystem size: 510.00MiB 00:16:00.944 Block group profiles: 00:16:00.944 Data: single 8.00MiB 00:16:00.944 Metadata: DUP 32.00MiB 00:16:00.944 System: DUP 8.00MiB 00:16:00.944 SSD detected: yes 00:16:00.944 Zoned device: no 00:16:00.944 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:00.944 Checksum: crc32c 00:16:00.944 Number of devices: 1 00:16:00.944 Devices: 00:16:00.944 ID SIZE PATH 00:16:00.944 1 510.00MiB /dev/nvme0n1p1 00:16:00.944 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:00.944 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1502251 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:01.510 00:16:01.510 real 0m1.166s 00:16:01.510 user 0m0.020s 00:16:01.510 sys 0m0.124s 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:01.510 ************************************ 00:16:01.510 END TEST filesystem_in_capsule_btrfs 00:16:01.510 ************************************ 00:16:01.510 14:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.510 ************************************ 00:16:01.510 START TEST filesystem_in_capsule_xfs 00:16:01.510 ************************************ 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:01.510 
14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:01.510 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:02.076 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:02.076 = sectsz=512 attr=2, projid32bit=1 00:16:02.076 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:02.076 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:02.076 data = bsize=4096 blocks=130560, imaxpct=25 00:16:02.076 = sunit=0 swidth=0 blks 00:16:02.076 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:02.076 log =internal log bsize=4096 blocks=16384, version=2 00:16:02.076 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:02.076 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:03.008 Discarding blocks...Done. 
00:16:03.008 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:03.008 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1502251 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:04.906 00:16:04.906 real 0m3.406s 00:16:04.906 user 0m0.031s 00:16:04.906 sys 0m0.067s 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:04.906 ************************************ 00:16:04.906 END TEST filesystem_in_capsule_xfs 00:16:04.906 ************************************ 00:16:04.906 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:05.164 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:05.164 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:05.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.422 14:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1502251 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1502251 ']' 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1502251 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.422 14:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502251 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502251' 00:16:05.422 killing process with pid 1502251 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1502251 00:16:05.422 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1502251 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:05.989 00:16:05.989 real 0m19.540s 00:16:05.989 user 1m16.930s 00:16:05.989 sys 0m1.505s 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:05.989 ************************************ 00:16:05.989 END TEST nvmf_filesystem_in_capsule 00:16:05.989 ************************************ 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:05.989 rmmod nvme_tcp 00:16:05.989 rmmod nvme_fabrics 00:16:05.989 rmmod nvme_keyring 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.989 14:35:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.894 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:07.894 00:16:07.894 real 0m46.762s 00:16:07.894 user 2m31.851s 00:16:07.894 sys 0m7.654s 00:16:07.894 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.894 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.894 ************************************ 00:16:07.894 END TEST nvmf_filesystem 00:16:07.894 ************************************ 00:16:08.153 14:35:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:08.153 14:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.153 14:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.153 14:35:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.153 ************************************ 00:16:08.153 START TEST nvmf_target_discovery 00:16:08.153 ************************************ 00:16:08.153 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:08.153 * Looking for test storage... 
00:16:08.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:08.153 
14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.153 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.154 --rc genhtml_branch_coverage=1 00:16:08.154 --rc genhtml_function_coverage=1 00:16:08.154 --rc genhtml_legend=1 00:16:08.154 --rc geninfo_all_blocks=1 00:16:08.154 --rc geninfo_unexecuted_blocks=1 00:16:08.154 00:16:08.154 ' 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.154 --rc genhtml_branch_coverage=1 00:16:08.154 --rc genhtml_function_coverage=1 00:16:08.154 --rc genhtml_legend=1 00:16:08.154 --rc geninfo_all_blocks=1 00:16:08.154 --rc geninfo_unexecuted_blocks=1 00:16:08.154 00:16:08.154 ' 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.154 --rc genhtml_branch_coverage=1 00:16:08.154 --rc genhtml_function_coverage=1 00:16:08.154 --rc genhtml_legend=1 00:16:08.154 --rc geninfo_all_blocks=1 00:16:08.154 --rc geninfo_unexecuted_blocks=1 00:16:08.154 00:16:08.154 ' 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:08.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.154 --rc genhtml_branch_coverage=1 00:16:08.154 --rc genhtml_function_coverage=1 00:16:08.154 --rc genhtml_legend=1 00:16:08.154 --rc geninfo_all_blocks=1 00:16:08.154 --rc geninfo_unexecuted_blocks=1 00:16:08.154 00:16:08.154 ' 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.154 14:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.154 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:08.412 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.986 14:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.986 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.987 14:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:14.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:14.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:14.987 14:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:14.987 Found net devices under 0000:86:00.0: cvl_0_0 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:14.987 14:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:14.987 Found net devices under 0000:86:00.1: cvl_0_1 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:14.987 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:14.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:16:14.988 00:16:14.988 --- 10.0.0.2 ping statistics --- 00:16:14.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.988 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:14.988 00:16:14.988 --- 10.0.0.1 ping statistics --- 00:16:14.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.988 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1509178 00:16:14.988 14:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1509178 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1509178 ']' 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.988 [2024-11-20 14:35:26.180475] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:14.988 [2024-11-20 14:35:26.180521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.988 [2024-11-20 14:35:26.260994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.988 [2024-11-20 14:35:26.303250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:14.988 [2024-11-20 14:35:26.303286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.988 [2024-11-20 14:35:26.303293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.988 [2024-11-20 14:35:26.303299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.988 [2024-11-20 14:35:26.303304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.988 [2024-11-20 14:35:26.304797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.988 [2024-11-20 14:35:26.304905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.988 [2024-11-20 14:35:26.305025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.988 [2024-11-20 14:35:26.305026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.988 [2024-11-20 14:35:26.447791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.988 Null1 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:14.988 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.988 
14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 [2024-11-20 14:35:26.506097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 Null2 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 
14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 Null3 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 Null4 00:16:14.989 
14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.989 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:14.989 00:16:14.989 Discovery Log Number of Records 6, Generation counter 6 00:16:14.989 =====Discovery Log Entry 0====== 00:16:14.989 trtype: tcp 00:16:14.989 adrfam: ipv4 00:16:14.989 subtype: current discovery subsystem 00:16:14.989 treq: not required 00:16:14.989 portid: 0 00:16:14.990 trsvcid: 4420 00:16:14.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:14.990 traddr: 10.0.0.2 00:16:14.990 eflags: explicit discovery connections, duplicate discovery information 00:16:14.990 sectype: none 00:16:14.990 =====Discovery Log Entry 1====== 00:16:14.990 trtype: tcp 00:16:14.990 adrfam: ipv4 00:16:14.990 subtype: nvme subsystem 00:16:14.990 treq: not required 00:16:14.990 portid: 0 00:16:14.990 trsvcid: 4420 00:16:14.990 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:14.990 traddr: 10.0.0.2 00:16:14.990 eflags: none 00:16:14.990 sectype: none 00:16:14.990 =====Discovery Log Entry 2====== 00:16:14.990 
trtype: tcp 00:16:14.990 adrfam: ipv4 00:16:14.990 subtype: nvme subsystem 00:16:14.990 treq: not required 00:16:14.990 portid: 0 00:16:14.990 trsvcid: 4420 00:16:14.990 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:14.990 traddr: 10.0.0.2 00:16:14.990 eflags: none 00:16:14.990 sectype: none 00:16:14.990 =====Discovery Log Entry 3====== 00:16:14.990 trtype: tcp 00:16:14.990 adrfam: ipv4 00:16:14.990 subtype: nvme subsystem 00:16:14.990 treq: not required 00:16:14.990 portid: 0 00:16:14.990 trsvcid: 4420 00:16:14.990 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:14.990 traddr: 10.0.0.2 00:16:14.990 eflags: none 00:16:14.990 sectype: none 00:16:14.990 =====Discovery Log Entry 4====== 00:16:14.990 trtype: tcp 00:16:14.990 adrfam: ipv4 00:16:14.990 subtype: nvme subsystem 00:16:14.990 treq: not required 00:16:14.990 portid: 0 00:16:14.990 trsvcid: 4420 00:16:14.990 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:14.990 traddr: 10.0.0.2 00:16:14.990 eflags: none 00:16:14.990 sectype: none 00:16:14.990 =====Discovery Log Entry 5====== 00:16:14.990 trtype: tcp 00:16:14.990 adrfam: ipv4 00:16:14.990 subtype: discovery subsystem referral 00:16:14.990 treq: not required 00:16:14.990 portid: 0 00:16:14.990 trsvcid: 4430 00:16:14.990 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:14.990 traddr: 10.0.0.2 00:16:14.990 eflags: none 00:16:14.990 sectype: none 00:16:14.990 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:14.990 Perform nvmf subsystem discovery via RPC 00:16:14.990 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:14.990 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.990 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.990 [ 00:16:14.990 { 00:16:14.990 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:16:14.990 "subtype": "Discovery", 00:16:14.990 "listen_addresses": [ 00:16:14.990 { 00:16:14.990 "trtype": "TCP", 00:16:14.990 "adrfam": "IPv4", 00:16:14.990 "traddr": "10.0.0.2", 00:16:14.990 "trsvcid": "4420" 00:16:14.990 } 00:16:14.990 ], 00:16:14.990 "allow_any_host": true, 00:16:14.990 "hosts": [] 00:16:14.990 }, 00:16:14.990 { 00:16:14.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.990 "subtype": "NVMe", 00:16:14.990 "listen_addresses": [ 00:16:14.990 { 00:16:14.990 "trtype": "TCP", 00:16:14.990 "adrfam": "IPv4", 00:16:14.990 "traddr": "10.0.0.2", 00:16:14.990 "trsvcid": "4420" 00:16:14.990 } 00:16:14.990 ], 00:16:14.990 "allow_any_host": true, 00:16:14.990 "hosts": [], 00:16:14.990 "serial_number": "SPDK00000000000001", 00:16:14.990 "model_number": "SPDK bdev Controller", 00:16:14.990 "max_namespaces": 32, 00:16:14.990 "min_cntlid": 1, 00:16:14.990 "max_cntlid": 65519, 00:16:14.990 "namespaces": [ 00:16:14.990 { 00:16:14.990 "nsid": 1, 00:16:14.990 "bdev_name": "Null1", 00:16:14.990 "name": "Null1", 00:16:14.990 "nguid": "D17268EE64ED4F9BB784B7E1723E8DA6", 00:16:14.990 "uuid": "d17268ee-64ed-4f9b-b784-b7e1723e8da6" 00:16:14.990 } 00:16:14.990 ] 00:16:14.990 }, 00:16:14.990 { 00:16:14.990 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:14.990 "subtype": "NVMe", 00:16:14.990 "listen_addresses": [ 00:16:14.990 { 00:16:14.990 "trtype": "TCP", 00:16:14.990 "adrfam": "IPv4", 00:16:14.990 "traddr": "10.0.0.2", 00:16:14.990 "trsvcid": "4420" 00:16:14.990 } 00:16:14.990 ], 00:16:14.990 "allow_any_host": true, 00:16:14.990 "hosts": [], 00:16:14.990 "serial_number": "SPDK00000000000002", 00:16:14.990 "model_number": "SPDK bdev Controller", 00:16:14.990 "max_namespaces": 32, 00:16:14.990 "min_cntlid": 1, 00:16:14.990 "max_cntlid": 65519, 00:16:14.990 "namespaces": [ 00:16:14.990 { 00:16:14.990 "nsid": 1, 00:16:14.990 "bdev_name": "Null2", 00:16:14.990 "name": "Null2", 00:16:14.990 "nguid": "4B7A0345DB3E4F33A26D2FB301C36C78", 
00:16:14.990 "uuid": "4b7a0345-db3e-4f33-a26d-2fb301c36c78" 00:16:14.990 } 00:16:14.990 ] 00:16:14.990 }, 00:16:14.990 { 00:16:14.990 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:14.990 "subtype": "NVMe", 00:16:14.990 "listen_addresses": [ 00:16:14.990 { 00:16:14.990 "trtype": "TCP", 00:16:14.990 "adrfam": "IPv4", 00:16:14.990 "traddr": "10.0.0.2", 00:16:14.990 "trsvcid": "4420" 00:16:14.990 } 00:16:14.990 ], 00:16:14.990 "allow_any_host": true, 00:16:14.990 "hosts": [], 00:16:14.990 "serial_number": "SPDK00000000000003", 00:16:14.990 "model_number": "SPDK bdev Controller", 00:16:14.990 "max_namespaces": 32, 00:16:14.990 "min_cntlid": 1, 00:16:14.990 "max_cntlid": 65519, 00:16:14.990 "namespaces": [ 00:16:14.990 { 00:16:14.990 "nsid": 1, 00:16:14.990 "bdev_name": "Null3", 00:16:14.990 "name": "Null3", 00:16:14.991 "nguid": "8DCFFA5B234041E5A7EB5018F03D30C2", 00:16:14.991 "uuid": "8dcffa5b-2340-41e5-a7eb-5018f03d30c2" 00:16:14.991 } 00:16:14.991 ] 00:16:14.991 }, 00:16:14.991 { 00:16:14.991 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:14.991 "subtype": "NVMe", 00:16:14.991 "listen_addresses": [ 00:16:14.991 { 00:16:14.991 "trtype": "TCP", 00:16:14.991 "adrfam": "IPv4", 00:16:14.991 "traddr": "10.0.0.2", 00:16:14.991 "trsvcid": "4420" 00:16:14.991 } 00:16:14.991 ], 00:16:14.991 "allow_any_host": true, 00:16:14.991 "hosts": [], 00:16:14.991 "serial_number": "SPDK00000000000004", 00:16:14.991 "model_number": "SPDK bdev Controller", 00:16:14.991 "max_namespaces": 32, 00:16:14.991 "min_cntlid": 1, 00:16:14.991 "max_cntlid": 65519, 00:16:14.991 "namespaces": [ 00:16:14.991 { 00:16:14.991 "nsid": 1, 00:16:14.991 "bdev_name": "Null4", 00:16:14.991 "name": "Null4", 00:16:14.991 "nguid": "73A948DB435A4A2FA966F67026EC552E", 00:16:14.991 "uuid": "73a948db-435a-4a2f-a966-f67026ec552e" 00:16:14.991 } 00:16:14.991 ] 00:16:14.991 } 00:16:14.991 ] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 
14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.991 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.249 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:15.249 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:16:15.249 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:15.249 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:15.249 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:15.249 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:15.250 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:15.250 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:15.250 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:15.250 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:15.250 rmmod nvme_tcp 00:16:15.250 rmmod nvme_fabrics 00:16:15.250 rmmod nvme_keyring 00:16:15.250 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1509178 ']' 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1509178 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1509178 ']' 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1509178 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1509178 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1509178' 00:16:15.250 killing process with pid 1509178 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1509178 00:16:15.250 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1509178 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.508 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.413 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:17.413 00:16:17.413 real 0m9.388s 00:16:17.413 user 0m5.458s 00:16:17.413 sys 0m4.969s 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:17.414 ************************************ 00:16:17.414 END TEST nvmf_target_discovery 00:16:17.414 ************************************ 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.414 ************************************ 00:16:17.414 START TEST nvmf_referrals 00:16:17.414 ************************************ 00:16:17.414 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:17.673 * Looking for test storage... 
00:16:17.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:17.673 14:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.673 
--rc genhtml_branch_coverage=1 00:16:17.673 --rc genhtml_function_coverage=1 00:16:17.673 --rc genhtml_legend=1 00:16:17.673 --rc geninfo_all_blocks=1 00:16:17.673 --rc geninfo_unexecuted_blocks=1 00:16:17.673 00:16:17.673 ' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.673 --rc genhtml_branch_coverage=1 00:16:17.673 --rc genhtml_function_coverage=1 00:16:17.673 --rc genhtml_legend=1 00:16:17.673 --rc geninfo_all_blocks=1 00:16:17.673 --rc geninfo_unexecuted_blocks=1 00:16:17.673 00:16:17.673 ' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.673 --rc genhtml_branch_coverage=1 00:16:17.673 --rc genhtml_function_coverage=1 00:16:17.673 --rc genhtml_legend=1 00:16:17.673 --rc geninfo_all_blocks=1 00:16:17.673 --rc geninfo_unexecuted_blocks=1 00:16:17.673 00:16:17.673 ' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:17.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.673 --rc genhtml_branch_coverage=1 00:16:17.673 --rc genhtml_function_coverage=1 00:16:17.673 --rc genhtml_legend=1 00:16:17.673 --rc geninfo_all_blocks=1 00:16:17.673 --rc geninfo_unexecuted_blocks=1 00:16:17.673 00:16:17.673 ' 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.673 
14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.673 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.674 14:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:17.674 14:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:17.674 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:24.243 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.243 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:24.244 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:24.244 Found net devices under 0000:86:00.0: cvl_0_0 00:16:24.244 14:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:24.244 Found net devices under 0000:86:00.1: cvl_0_1 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:24.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:16:24.244 00:16:24.244 --- 10.0.0.2 ping statistics --- 00:16:24.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.244 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:24.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:16:24.244 00:16:24.244 --- 10.0.0.1 ping statistics --- 00:16:24.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.244 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1512962 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1512962 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1512962 ']' 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.244 [2024-11-20 14:35:35.659616] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:24.244 [2024-11-20 14:35:35.659666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.244 [2024-11-20 14:35:35.741079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.244 [2024-11-20 14:35:35.781641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.244 [2024-11-20 14:35:35.781682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:24.244 [2024-11-20 14:35:35.781689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.244 [2024-11-20 14:35:35.781695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.244 [2024-11-20 14:35:35.781700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.244 [2024-11-20 14:35:35.783323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.244 [2024-11-20 14:35:35.783438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.244 [2024-11-20 14:35:35.783525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.244 [2024-11-20 14:35:35.783525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:24.244 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 [2024-11-20 14:35:35.934023] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 [2024-11-20 14:35:35.966129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:24.245 14:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.245 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.503 14:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.503 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:24.761 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.019 14:35:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:25.276 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:25.533 14:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.533 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:25.792 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:26.060 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:26.060 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:26.060 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:26.060 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.061 rmmod nvme_tcp 00:16:26.061 rmmod nvme_fabrics 00:16:26.061 rmmod nvme_keyring 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1512962 ']' 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1512962 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1512962 ']' 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1512962 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.061 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512962 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1512962' 00:16:26.323 killing process with pid 1512962 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1512962 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1512962 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:26.323 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.324 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.324 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:28.860 00:16:28.860 real 0m10.913s 00:16:28.860 user 0m12.268s 00:16:28.860 sys 0m5.226s 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:28.860 
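The referrals test traced above repeats one pattern: mutate referral state with `rpc_cmd nvmf_discovery_add_referral` / `nvmf_discovery_remove_referral`, then verify by reading the traddr list back (via RPC or `nvme discover`), sorting it, and string-comparing against the expected set. A minimal standalone sketch of that compare step is below; the RPC/jq plumbing is replaced by a stub argument list so the logic runs without an SPDK target (the stub is an illustration, not the real `get_referral_ips` helper from target/referrals.sh):

```shell
#!/usr/bin/env bash
# Mirrors the verification pattern from target/referrals.sh: collect the
# referral traddrs, sort them, join with spaces, compare to the expected set.
# In the real test the input comes from:
#   rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# Here a stub list stands in for that pipeline (assumption for illustration).
get_referral_ips() {
    printf '%s\n' "$@" | sort | xargs
}

ips=$(get_referral_ips 127.0.0.3 127.0.0.2 127.0.0.4)
# Same shape as the trace's "[[ 127.0.0.2 127.0.0.3 127.0.0.4 == ... ]]" check.
if [[ "$ips" == "127.0.0.2 127.0.0.3 127.0.0.4" ]]; then
    echo "referral set matches"
fi
```

Sorting before comparison is what makes the check order-independent: the RPC and `nvme discover` may report referrals in different orders, but both reduce to the same joined string.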
************************************ 00:16:28.860 END TEST nvmf_referrals 00:16:28.860 ************************************ 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.860 ************************************ 00:16:28.860 START TEST nvmf_connect_disconnect 00:16:28.860 ************************************ 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:28.860 * Looking for test storage... 
00:16:28.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:28.860 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
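The trace above walks scripts/common.sh's `cmp_versions` as it decides that lcov 1.15 is less than 2 (`lt 1.15 2` returns 0), which is why the `--rc lcov_branch_coverage=1` style options get applied next. A condensed sketch of that component-wise comparison, simplified to numeric dot-separated fields only (the real helper also splits on `-` and `:` via `IFS=.-:`):

```shell
# Condensed version-compare sketch (numeric, dot-separated fields only;
# the real cmp_versions also splits on '-' and ':').
lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local i max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( i = 0; i < max; i++ )); do
        # missing components count as 0, mirroring the padded loop above
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "lcov 1.15 < 2: branch/function coverage rc options apply"
```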
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:28.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.861 --rc genhtml_branch_coverage=1 00:16:28.861 --rc genhtml_function_coverage=1 00:16:28.861 --rc genhtml_legend=1 00:16:28.861 --rc geninfo_all_blocks=1 00:16:28.861 --rc geninfo_unexecuted_blocks=1 00:16:28.861 00:16:28.861 ' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:28.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.861 --rc genhtml_branch_coverage=1 00:16:28.861 --rc genhtml_function_coverage=1 00:16:28.861 --rc genhtml_legend=1 00:16:28.861 --rc geninfo_all_blocks=1 00:16:28.861 --rc geninfo_unexecuted_blocks=1 00:16:28.861 00:16:28.861 ' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:28.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.861 --rc genhtml_branch_coverage=1 00:16:28.861 --rc genhtml_function_coverage=1 00:16:28.861 --rc genhtml_legend=1 00:16:28.861 --rc geninfo_all_blocks=1 00:16:28.861 --rc geninfo_unexecuted_blocks=1 00:16:28.861 00:16:28.861 ' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:28.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.861 --rc genhtml_branch_coverage=1 00:16:28.861 --rc genhtml_function_coverage=1 00:16:28.861 --rc genhtml_legend=1 00:16:28.861 --rc geninfo_all_blocks=1 00:16:28.861 --rc geninfo_unexecuted_blocks=1 00:16:28.861 00:16:28.861 ' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.861 14:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.498 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:35.498 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:35.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.498 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:35.499 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:35.499 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:35.499 Found net devices under 0000:86:00.0: cvl_0_0 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:35.499 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:35.499 Found net devices under 0000:86:00.1: cvl_0_1 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
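The enumeration traced above builds `pci_devs` from the e810 ID list (0x8086:0x1592 and 0x8086:0x159b), then resolves each matching function (here 0000:86:00.0 and 0000:86:00.1, both bound to the `ice` driver) to its net device through `/sys/bus/pci/devices/$pci/net/`, yielding cvl_0_0 and cvl_0_1. A minimal, parameterized sketch of that sysfs walk; the function name is illustrative and the ID list is only the two E810 entries relevant to this run, not nvmf/common.sh's full table:

```shell
# Hypothetical helper mirroring gather_supported_nvmf_pci_devs: scan a
# sysfs-style tree for the two E810 IDs from the trace and print the
# kernel net-device name behind each matching PCI function.
find_e810_netdevs() {
    local base=$1 pci vendor device net
    local -a found=()
    for pci in "$base"/*; do
        [ -d "$pci" ] || continue
        vendor=$(cat "$pci/vendor" 2>/dev/null)
        device=$(cat "$pci/device" 2>/dev/null)
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b)   # e810 list from the trace
                for net in "$pci"/net/*; do
                    [ -e "$net" ] && found+=("${net##*/}")
                done
                ;;
        esac
    done
    (( ${#found[@]} )) && printf '%s\n' "${found[@]}"
}

find_e810_netdevs /sys/bus/pci/devices || true   # empty on boxes without E810
```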
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.499 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:35.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:16:35.499 00:16:35.499 --- 10.0.0.2 ping statistics --- 00:16:35.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.499 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
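The sequence above splits the two physical ports into a target/initiator pair without veth: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP port 4420, and the two pings verify reachability in both directions. Condensed as a dry-run sketch (it prints rather than executes, since the real commands need root and these exact NICs):

```shell
# Dry-run recap of the namespace plumbing from the trace; clear DRY_RUN
# (and run as root on a matching box) to execute for real.
run() { ${DRY_RUN:+echo} "$@"; }
DRY_RUN=1
ns=cvl_0_0_ns_spdk

run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"                        # target-side port
run ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                     # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1                 # target -> initiator
```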
00:16:35.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:16:35.499 00:16:35.499 --- 10.0.0.1 ping statistics --- 00:16:35.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.499 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1516885 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1516885 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1516885 ']' 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.499 [2024-11-20 14:35:46.616567] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:16:35.499 [2024-11-20 14:35:46.616613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.499 [2024-11-20 14:35:46.696116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.499 [2024-11-20 14:35:46.739688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
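Above, nvmf_tgt is launched inside the target namespace (`ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF`) and the harness blocks in `waitforlisten` until the app is listening on `/var/tmp/spdk.sock`. A minimal polling sketch of that wait; this is a simplification, since the real helper also checks the pid and retries against the socket rather than only testing for its existence:

```shell
# Minimal stand-in for waitforlisten: poll until the RPC UNIX socket
# appears (the real helper retries up to 100 times and probes over RPC).
waitforlisten_sketch() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

On success the harness proceeds to configure the target over that socket; on timeout the run aborts.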
00:16:35.499 [2024-11-20 14:35:46.739723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.499 [2024-11-20 14:35:46.739731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.499 [2024-11-20 14:35:46.739738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.499 [2024-11-20 14:35:46.739746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.499 [2024-11-20 14:35:46.741164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.499 [2024-11-20 14:35:46.741242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.499 [2024-11-20 14:35:46.741242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.499 [2024-11-20 14:35:46.741208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.499 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:35.500 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.500 [2024-11-20 14:35:46.887275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.500 14:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:35.500 [2024-11-20 14:35:46.950311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:35.500 14:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:38.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.257 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:51.257 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:51.258 14:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:51.258 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:51.258 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:51.258 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:51.258 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:51.258 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:51.258 rmmod nvme_tcp 00:16:51.258 rmmod nvme_fabrics 00:16:51.258 rmmod nvme_keyring 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1516885 ']' 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1516885 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1516885 ']' 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1516885 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:51.531 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1516885 
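
For readability, the RPC sequence that connect_disconnect.sh traces above (create the TCP transport, back it with a malloc bdev, create the subsystem, attach the namespace, add the listener) can be sketched as a plain script. The NQN, serial, address, and port are taken directly from the log; the `scripts/rpc.py` path is an assumption, and the commands are only printed here (dry run) rather than sent to a live nvmf_tgt.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_connect_disconnect target setup traced in the log above.
# Dry run: commands are printed, not executed, since they need a running
# nvmf_tgt. NQN/serial/address/port come from the log; the rpc.py path is
# an assumption.
set -euo pipefail

RPC="scripts/rpc.py"        # assumed path to the SPDK JSON-RPC client

rpc() { echo "$RPC $*"; }   # swap the echo for "$RPC" "$@" on a live target

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc bdev_malloc_create 64 512                      # log shows this returns Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up, the test loops `nvme connect`/`nvme disconnect` against the subsystem five times, which is what produces the repeated "disconnected 1 controller(s)" lines above.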
00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1516885' 00:16:51.532 killing process with pid 1516885 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1516885 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1516885 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.532 14:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.532 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:54.070 00:16:54.070 real 0m25.177s 00:16:54.070 user 1m8.087s 00:16:54.070 sys 0m5.877s 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:54.070 ************************************ 00:16:54.070 END TEST nvmf_connect_disconnect 00:16:54.070 ************************************ 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:54.070 ************************************ 00:16:54.070 START TEST nvmf_multitarget 00:16:54.070 ************************************ 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:54.070 * Looking for test storage... 
00:16:54.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:54.070 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.070 --rc genhtml_branch_coverage=1 00:16:54.070 --rc genhtml_function_coverage=1 00:16:54.070 --rc genhtml_legend=1 00:16:54.070 --rc geninfo_all_blocks=1 00:16:54.070 --rc geninfo_unexecuted_blocks=1 00:16:54.070 00:16:54.070 ' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.070 --rc genhtml_branch_coverage=1 00:16:54.070 --rc genhtml_function_coverage=1 00:16:54.070 --rc genhtml_legend=1 00:16:54.070 --rc geninfo_all_blocks=1 00:16:54.070 --rc geninfo_unexecuted_blocks=1 00:16:54.070 00:16:54.070 ' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.070 --rc genhtml_branch_coverage=1 00:16:54.070 --rc genhtml_function_coverage=1 00:16:54.070 --rc genhtml_legend=1 00:16:54.070 --rc geninfo_all_blocks=1 00:16:54.070 --rc geninfo_unexecuted_blocks=1 00:16:54.070 00:16:54.070 ' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:54.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.070 --rc genhtml_branch_coverage=1 00:16:54.070 --rc genhtml_function_coverage=1 00:16:54.070 --rc genhtml_legend=1 00:16:54.070 --rc geninfo_all_blocks=1 00:16:54.070 --rc geninfo_unexecuted_blocks=1 00:16:54.070 00:16:54.070 ' 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.070 14:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.070 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
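
The lcov version gate traced earlier (`cmp_versions 1.15 '<' 2` via the `lt` helper in scripts/common.sh) compares dotted version strings field by numeric field. A minimal standalone sketch of that idea — in the spirit of the traced helpers, not the exact SPDK implementation:

```shell
# Minimal dotted-version "less than" check, sketching the field-by-field
# numeric comparison that the lt/cmp_versions trace above walks through.
# Not the exact scripts/common.sh code; missing fields are treated as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)                  # split "1.15" -> (1 15) on dots
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}     # pad shorter version with zeros
        (( x < y )) && return 0             # first differing field decides
        (( x > y )) && return 1
    done
    return 1                                # equal versions are not "less than"
}
```

This is why `lcov 1.15` passes a `< 2` gate even though `15 > 2` as a plain number: only the first field (1 vs 2) is compared before a decision is reached.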
00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:54.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.071 14:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:54.071 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:00.637 14:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:00.637 14:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:00.637 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:00.637 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.637 14:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.637 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:00.638 Found net devices under 0000:86:00.0: cvl_0_0 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.638 
14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:00.638 Found net devices under 0000:86:00.1: cvl_0_1 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.638 14:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:00.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:17:00.638 00:17:00.638 --- 10.0.0.2 ping statistics --- 00:17:00.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.638 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:00.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:17:00.638 00:17:00.638 --- 10.0.0.1 ping statistics --- 00:17:00.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.638 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1523734 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 1523734 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1523734 ']' 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.638 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.638 [2024-11-20 14:36:11.884345] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:17:00.638 [2024-11-20 14:36:11.884396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.638 [2024-11-20 14:36:11.946171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.638 [2024-11-20 14:36:11.990667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.638 [2024-11-20 14:36:11.990700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:00.638 [2024-11-20 14:36:11.990709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.638 [2024-11-20 14:36:11.990716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.638 [2024-11-20 14:36:11.990721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.638 [2024-11-20 14:36:11.992331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.638 [2024-11-20 14:36:11.992452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.638 [2024-11-20 14:36:11.992557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.638 [2024-11-20 14:36:11.992558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.638 14:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:00.638 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:00.639 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:00.639 "nvmf_tgt_1" 00:17:00.639 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:00.639 "nvmf_tgt_2" 00:17:00.639 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.639 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:00.639 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:00.639 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:00.896 true 00:17:00.896 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:00.896 true 00:17:00.896 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.896 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.155 rmmod nvme_tcp 00:17:01.155 rmmod nvme_fabrics 00:17:01.155 rmmod nvme_keyring 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1523734 ']' 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1523734 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1523734 ']' 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1523734 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.155 14:36:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523734 00:17:01.155 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.155 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.155 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523734' 00:17:01.155 killing process with pid 1523734 00:17:01.155 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1523734 00:17:01.155 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1523734 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.415 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.322 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.322 00:17:03.322 real 0m9.668s 00:17:03.322 user 0m7.357s 00:17:03.322 sys 0m4.950s 00:17:03.322 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.322 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.322 ************************************ 00:17:03.322 END TEST nvmf_multitarget 00:17:03.322 ************************************ 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.581 ************************************ 00:17:03.581 START TEST nvmf_rpc 00:17:03.581 ************************************ 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.581 * Looking for test storage... 
00:17:03.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.581 14:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:03.581 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.582 --rc genhtml_branch_coverage=1 00:17:03.582 --rc genhtml_function_coverage=1 00:17:03.582 --rc genhtml_legend=1 00:17:03.582 --rc geninfo_all_blocks=1 00:17:03.582 --rc geninfo_unexecuted_blocks=1 
00:17:03.582 00:17:03.582 ' 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.582 --rc genhtml_branch_coverage=1 00:17:03.582 --rc genhtml_function_coverage=1 00:17:03.582 --rc genhtml_legend=1 00:17:03.582 --rc geninfo_all_blocks=1 00:17:03.582 --rc geninfo_unexecuted_blocks=1 00:17:03.582 00:17:03.582 ' 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.582 --rc genhtml_branch_coverage=1 00:17:03.582 --rc genhtml_function_coverage=1 00:17:03.582 --rc genhtml_legend=1 00:17:03.582 --rc geninfo_all_blocks=1 00:17:03.582 --rc geninfo_unexecuted_blocks=1 00:17:03.582 00:17:03.582 ' 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.582 --rc genhtml_branch_coverage=1 00:17:03.582 --rc genhtml_function_coverage=1 00:17:03.582 --rc genhtml_legend=1 00:17:03.582 --rc geninfo_all_blocks=1 00:17:03.582 --rc geninfo_unexecuted_blocks=1 00:17:03.582 00:17:03.582 ' 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.582 14:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.582 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.840 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.841 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.841 14:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.414 
14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:17:10.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:10.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:10.414 Found net devices under 0000:86:00.0: cvl_0_0 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.414 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:10.415 Found net devices under 0000:86:00.1: cvl_0_1 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.415 14:36:21 
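The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above turns sysfs paths into bare interface names. A small sketch, using a path and interface name copied from the log output (the glob that produced the path is assumed to have already run):

```shell
# Paths come from a glob like /sys/bus/pci/devices/$pci/net/*.
pci_net_devs=(/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0)
# "${arr[@]##*/}" strips everything up to the last '/' in each element,
# leaving only the kernel interface name.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # cvl_0_0
```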
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:10.415 
14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:10.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:17:10.415 00:17:10.415 --- 10.0.0.2 ping statistics --- 00:17:10.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.415 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:17:10.415 00:17:10.415 --- 10.0.0.1 ping statistics --- 00:17:10.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.415 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1527518 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:10.415 
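The `ipts` call at common.sh@287 expands (at @790) to an `iptables` invocation tagged with an `SPDK_NVMF:` comment, so the test harness can later find and delete exactly the rules it inserted. A dry-run sketch of that wrapper, inferred from the expanded command in the log (`eth0` is a hypothetical interface; the real run uses `cvl_0_1`, and the real wrapper invokes `iptables` instead of echoing):

```shell
# Dry-run variant: echoes the command instead of invoking iptables,
# so it needs no root and touches no firewall state.
ipts() { echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
ipts -I INPUT 1 -i eth0 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules with `-m comment` is a common pattern for harnesses: cleanup becomes "delete every rule whose comment starts with SPDK_NVMF:" rather than tracking rule numbers.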
14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1527518 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1527518 ']' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.415 [2024-11-20 14:36:21.582120] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:17:10.415 [2024-11-20 14:36:21.582169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.415 [2024-11-20 14:36:21.664816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.415 [2024-11-20 14:36:21.708653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.415 [2024-11-20 14:36:21.708692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.415 [2024-11-20 14:36:21.708701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.415 [2024-11-20 14:36:21.708708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:10.415 [2024-11-20 14:36:21.708713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.415 [2024-11-20 14:36:21.710271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.415 [2024-11-20 14:36:21.710382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.415 [2024-11-20 14:36:21.710491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.415 [2024-11-20 14:36:21.710492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:10.415 "tick_rate": 2300000000, 00:17:10.415 "poll_groups": [ 00:17:10.415 { 00:17:10.415 "name": "nvmf_tgt_poll_group_000", 00:17:10.415 "admin_qpairs": 0, 00:17:10.415 "io_qpairs": 0, 00:17:10.415 
"current_admin_qpairs": 0, 00:17:10.415 "current_io_qpairs": 0, 00:17:10.415 "pending_bdev_io": 0, 00:17:10.415 "completed_nvme_io": 0, 00:17:10.415 "transports": [] 00:17:10.415 }, 00:17:10.415 { 00:17:10.415 "name": "nvmf_tgt_poll_group_001", 00:17:10.415 "admin_qpairs": 0, 00:17:10.415 "io_qpairs": 0, 00:17:10.415 "current_admin_qpairs": 0, 00:17:10.415 "current_io_qpairs": 0, 00:17:10.415 "pending_bdev_io": 0, 00:17:10.415 "completed_nvme_io": 0, 00:17:10.415 "transports": [] 00:17:10.415 }, 00:17:10.415 { 00:17:10.415 "name": "nvmf_tgt_poll_group_002", 00:17:10.415 "admin_qpairs": 0, 00:17:10.415 "io_qpairs": 0, 00:17:10.415 "current_admin_qpairs": 0, 00:17:10.415 "current_io_qpairs": 0, 00:17:10.415 "pending_bdev_io": 0, 00:17:10.415 "completed_nvme_io": 0, 00:17:10.415 "transports": [] 00:17:10.415 }, 00:17:10.415 { 00:17:10.415 "name": "nvmf_tgt_poll_group_003", 00:17:10.415 "admin_qpairs": 0, 00:17:10.415 "io_qpairs": 0, 00:17:10.415 "current_admin_qpairs": 0, 00:17:10.415 "current_io_qpairs": 0, 00:17:10.415 "pending_bdev_io": 0, 00:17:10.415 "completed_nvme_io": 0, 00:17:10.415 "transports": [] 00:17:10.415 } 00:17:10.415 ] 00:17:10.415 }' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:10.415 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 [2024-11-20 14:36:21.961754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:10.416 "tick_rate": 2300000000, 00:17:10.416 "poll_groups": [ 00:17:10.416 { 00:17:10.416 "name": "nvmf_tgt_poll_group_000", 00:17:10.416 "admin_qpairs": 0, 00:17:10.416 "io_qpairs": 0, 00:17:10.416 "current_admin_qpairs": 0, 00:17:10.416 "current_io_qpairs": 0, 00:17:10.416 "pending_bdev_io": 0, 00:17:10.416 "completed_nvme_io": 0, 00:17:10.416 "transports": [ 00:17:10.416 { 00:17:10.416 "trtype": "TCP" 00:17:10.416 } 00:17:10.416 ] 00:17:10.416 }, 00:17:10.416 { 00:17:10.416 "name": "nvmf_tgt_poll_group_001", 00:17:10.416 "admin_qpairs": 0, 00:17:10.416 "io_qpairs": 0, 00:17:10.416 "current_admin_qpairs": 0, 00:17:10.416 "current_io_qpairs": 0, 00:17:10.416 "pending_bdev_io": 0, 00:17:10.416 "completed_nvme_io": 0, 00:17:10.416 "transports": [ 00:17:10.416 { 00:17:10.416 "trtype": "TCP" 00:17:10.416 } 00:17:10.416 ] 00:17:10.416 }, 00:17:10.416 { 00:17:10.416 "name": "nvmf_tgt_poll_group_002", 00:17:10.416 "admin_qpairs": 0, 00:17:10.416 "io_qpairs": 0, 00:17:10.416 
"current_admin_qpairs": 0, 00:17:10.416 "current_io_qpairs": 0, 00:17:10.416 "pending_bdev_io": 0, 00:17:10.416 "completed_nvme_io": 0, 00:17:10.416 "transports": [ 00:17:10.416 { 00:17:10.416 "trtype": "TCP" 00:17:10.416 } 00:17:10.416 ] 00:17:10.416 }, 00:17:10.416 { 00:17:10.416 "name": "nvmf_tgt_poll_group_003", 00:17:10.416 "admin_qpairs": 0, 00:17:10.416 "io_qpairs": 0, 00:17:10.416 "current_admin_qpairs": 0, 00:17:10.416 "current_io_qpairs": 0, 00:17:10.416 "pending_bdev_io": 0, 00:17:10.416 "completed_nvme_io": 0, 00:17:10.416 "transports": [ 00:17:10.416 { 00:17:10.416 "trtype": "TCP" 00:17:10.416 } 00:17:10.416 ] 00:17:10.416 } 00:17:10.416 ] 00:17:10.416 }' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:10.416 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 Malloc1 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 [2024-11-20 14:36:22.147408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.416 
14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:10.416 [2024-11-20 14:36:22.182172] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:17:10.416 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:10.416 could not add new controller: failed to write to nvme-fabrics device 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.416 14:36:22 
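The `NOT nvme connect ...` invocation above asserts that the connect is rejected (the subsystem does not yet allow the host NQN), so the expected `Input/output error` counts as a pass. A simplified sketch of that inversion helper; the real `NOT` in autotest_common.sh also validates its argument via `valid_exec_arg`, which this sketch omits:

```shell
# NOT flips a command's exit status: it succeeds only when the wrapped
# command fails, so an expected error does not abort a `set -e` run.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT false && echo "failure was expected"
NOT true  || echo "unexpected success"
```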
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.416 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.350 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.350 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:11.350 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.350 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:11.350 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.878 14:36:25 
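The `waitforserial` steps above poll `lsblk ... | grep -c <serial>` until the device count matches, bounded by `(( i++ <= 15 ))`. A generic sketch of that retry pattern; `probe` is a stand-in for the real `lsblk | grep -c` pipeline:

```shell
# Poll a probe command until it reports the wanted count, giving up
# after 15 attempts (mirroring the i++ <= 15 bound in the log).
waitfor() {
    local i=0 want=$1; shift
    while (( i++ <= 15 )); do
        (( $("$@") == want )) && return 0
        sleep 0.1
    done
    return 1
}
probe() { echo 1; }   # hypothetical probe; real code counts lsblk serial matches
waitfor 1 probe && echo "device present"
```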
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.878 [2024-11-20 14:36:25.488050] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:17:13.878 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:13.878 could not add new controller: failed to write to nvme-fabrics device 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.878 14:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.878 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:14.812 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:14.812 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:14.812 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:14.812 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:14.812 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:16.713 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.971 [2024-11-20 14:36:28.806303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.971 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.344 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.344 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.344 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.344 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.344 14:36:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.247 14:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.247 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.506 [2024-11-20 14:36:32.211279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.506 14:36:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.879 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.879 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.879 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.879 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.879 14:36:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.779 [2024-11-20 14:36:35.554961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.779 14:36:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:25.154 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:25.154 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:25.154 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:25.154 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:25.154 14:36:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 [2024-11-20 14:36:38.861760] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.052 14:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.425 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.425 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.425 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.425 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:28.425 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.322 [2024-11-20 14:36:42.228408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.322 14:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.322 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:31.694 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:31.694 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:31.694 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:31.694 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:31.694 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:33.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 [2024-11-20 14:36:45.509855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.594 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 [2024-11-20 14:36:45.557945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.854 
14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 [2024-11-20 14:36:45.606097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:33.854 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:33.855 
14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 [2024-11-20 14:36:45.654258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 [2024-11-20 
14:36:45.702412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 
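The five iterations traced above all exercise the same create/listen/attach/detach/delete cycle from target/rpc.sh lines 99-107. A minimal sketch of that loop, with `rpc_cmd` stubbed to merely record each call (the real helper sends these as JSON-RPC requests to the running SPDK target), looks like this:

```shell
# Stub: the real rpc_cmd forwards to scripts/rpc.py against the SPDK target.
rpc_cmd() { calls+=("$*"); }

loops=5
calls=()
for i in $(seq 1 "$loops"); do
    # Create the subsystem, expose it over TCP, attach and detach a namespace,
    # then tear the subsystem down again -- one full churn per iteration.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
echo "${#calls[@]}"   # 30 recorded RPC calls (6 per iteration x 5 loops)
```

With the stub in place the loop records 6 RPC calls per iteration; against a live target each call would instead appear as one of the `rpc.sh@100`-`rpc.sh@107` trace lines seen above.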
14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:33.855 "tick_rate": 2300000000, 00:17:33.855 "poll_groups": [ 00:17:33.855 { 00:17:33.855 "name": "nvmf_tgt_poll_group_000", 00:17:33.855 "admin_qpairs": 2, 00:17:33.855 "io_qpairs": 168, 00:17:33.855 "current_admin_qpairs": 0, 00:17:33.855 "current_io_qpairs": 0, 00:17:33.855 "pending_bdev_io": 0, 00:17:33.855 "completed_nvme_io": 256, 00:17:33.855 "transports": [ 00:17:33.855 { 00:17:33.855 "trtype": "TCP" 00:17:33.855 } 00:17:33.855 ] 00:17:33.855 }, 00:17:33.855 { 00:17:33.855 "name": "nvmf_tgt_poll_group_001", 00:17:33.855 "admin_qpairs": 2, 00:17:33.855 "io_qpairs": 168, 00:17:33.855 "current_admin_qpairs": 0, 00:17:33.855 "current_io_qpairs": 0, 00:17:33.855 "pending_bdev_io": 0, 00:17:33.855 "completed_nvme_io": 219, 00:17:33.855 "transports": [ 00:17:33.855 { 00:17:33.855 "trtype": "TCP" 00:17:33.855 } 00:17:33.855 ] 00:17:33.855 }, 00:17:33.855 { 00:17:33.855 "name": "nvmf_tgt_poll_group_002", 00:17:33.855 "admin_qpairs": 1, 00:17:33.855 "io_qpairs": 168, 00:17:33.855 "current_admin_qpairs": 0, 00:17:33.855 "current_io_qpairs": 0, 00:17:33.855 "pending_bdev_io": 0, 00:17:33.855 "completed_nvme_io": 266, 00:17:33.855 "transports": [ 00:17:33.855 { 00:17:33.855 "trtype": "TCP" 00:17:33.855 } 00:17:33.855 ] 00:17:33.855 }, 00:17:33.855 { 00:17:33.855 "name": "nvmf_tgt_poll_group_003", 00:17:33.855 "admin_qpairs": 2, 00:17:33.855 "io_qpairs": 168, 
00:17:33.855 "current_admin_qpairs": 0, 00:17:33.855 "current_io_qpairs": 0, 00:17:33.855 "pending_bdev_io": 0, 00:17:33.855 "completed_nvme_io": 281, 00:17:33.855 "transports": [ 00:17:33.855 { 00:17:33.855 "trtype": "TCP" 00:17:33.855 } 00:17:33.855 ] 00:17:33.855 } 00:17:33.855 ] 00:17:33.855 }' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:33.855 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.113 rmmod nvme_tcp 00:17:34.113 rmmod nvme_fabrics 00:17:34.113 rmmod nvme_keyring 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1527518 ']' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1527518 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1527518 ']' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1527518 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1527518 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1527518' 00:17:34.113 killing process with pid 1527518 00:17:34.113 14:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1527518 00:17:34.113 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1527518 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.372 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.278 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.278 00:17:36.278 real 0m32.879s 00:17:36.278 user 1m39.096s 00:17:36.278 sys 0m6.533s 00:17:36.278 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.278 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.278 ************************************ 00:17:36.278 END TEST 
nvmf_rpc 00:17:36.278 ************************************ 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.538 ************************************ 00:17:36.538 START TEST nvmf_invalid 00:17:36.538 ************************************ 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:36.538 * Looking for test storage... 00:17:36.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:36.538 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.539 --rc genhtml_branch_coverage=1 00:17:36.539 --rc genhtml_function_coverage=1 00:17:36.539 --rc genhtml_legend=1 00:17:36.539 --rc geninfo_all_blocks=1 00:17:36.539 --rc geninfo_unexecuted_blocks=1 00:17:36.539 00:17:36.539 ' 
00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.539 --rc genhtml_branch_coverage=1 00:17:36.539 --rc genhtml_function_coverage=1 00:17:36.539 --rc genhtml_legend=1 00:17:36.539 --rc geninfo_all_blocks=1 00:17:36.539 --rc geninfo_unexecuted_blocks=1 00:17:36.539 00:17:36.539 ' 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.539 --rc genhtml_branch_coverage=1 00:17:36.539 --rc genhtml_function_coverage=1 00:17:36.539 --rc genhtml_legend=1 00:17:36.539 --rc geninfo_all_blocks=1 00:17:36.539 --rc geninfo_unexecuted_blocks=1 00:17:36.539 00:17:36.539 ' 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.539 --rc genhtml_branch_coverage=1 00:17:36.539 --rc genhtml_function_coverage=1 00:17:36.539 --rc genhtml_legend=1 00:17:36.539 --rc geninfo_all_blocks=1 00:17:36.539 --rc geninfo_unexecuted_blocks=1 00:17:36.539 00:17:36.539 ' 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.539 14:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.539 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.799 
14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.799 14:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.799 14:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.799 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.368 14:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.368 14:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:43.368 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:43.368 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:43.368 Found net devices under 0000:86:00.0: cvl_0_0 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:43.368 Found net devices under 0000:86:00.1: cvl_0_1 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.368 14:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.368 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.369 14:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:17:43.369 00:17:43.369 --- 10.0.0.2 ping statistics --- 00:17:43.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.369 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:17:43.369 00:17:43.369 --- 10.0.0.1 ping statistics --- 00:17:43.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.369 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.369 14:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1535127 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1535127 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1535127 ']' 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:43.369 [2024-11-20 14:36:54.533894] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:17:43.369 [2024-11-20 14:36:54.533945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.369 [2024-11-20 14:36:54.613529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.369 [2024-11-20 14:36:54.656198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.369 [2024-11-20 14:36:54.656237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.369 [2024-11-20 14:36:54.656244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.369 [2024-11-20 14:36:54.656250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.369 [2024-11-20 14:36:54.656255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:43.369 [2024-11-20 14:36:54.657743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.369 [2024-11-20 14:36:54.657856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.369 [2024-11-20 14:36:54.657982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.369 [2024-11-20 14:36:54.657983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:43.369 14:36:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24793 00:17:43.369 [2024-11-20 14:36:54.977285] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:43.369 { 00:17:43.369 "nqn": "nqn.2016-06.io.spdk:cnode24793", 00:17:43.369 "tgt_name": "foobar", 00:17:43.369 "method": "nvmf_create_subsystem", 00:17:43.369 "req_id": 1 00:17:43.369 } 00:17:43.369 Got JSON-RPC error 
response 00:17:43.369 response: 00:17:43.369 { 00:17:43.369 "code": -32603, 00:17:43.369 "message": "Unable to find target foobar" 00:17:43.369 }' 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:43.369 { 00:17:43.369 "nqn": "nqn.2016-06.io.spdk:cnode24793", 00:17:43.369 "tgt_name": "foobar", 00:17:43.369 "method": "nvmf_create_subsystem", 00:17:43.369 "req_id": 1 00:17:43.369 } 00:17:43.369 Got JSON-RPC error response 00:17:43.369 response: 00:17:43.369 { 00:17:43.369 "code": -32603, 00:17:43.369 "message": "Unable to find target foobar" 00:17:43.369 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19411 00:17:43.369 [2024-11-20 14:36:55.177966] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19411: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:43.369 { 00:17:43.369 "nqn": "nqn.2016-06.io.spdk:cnode19411", 00:17:43.369 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:43.369 "method": "nvmf_create_subsystem", 00:17:43.369 "req_id": 1 00:17:43.369 } 00:17:43.369 Got JSON-RPC error response 00:17:43.369 response: 00:17:43.369 { 00:17:43.369 "code": -32602, 00:17:43.369 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:43.369 }' 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:43.369 { 00:17:43.369 "nqn": "nqn.2016-06.io.spdk:cnode19411", 00:17:43.369 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:43.369 "method": "nvmf_create_subsystem", 
00:17:43.369 "req_id": 1 00:17:43.369 } 00:17:43.369 Got JSON-RPC error response 00:17:43.369 response: 00:17:43.369 { 00:17:43.369 "code": -32602, 00:17:43.369 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:43.369 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:43.369 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24260 00:17:43.628 [2024-11-20 14:36:55.378619] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24260: invalid model number 'SPDK_Controller' 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:43.628 { 00:17:43.628 "nqn": "nqn.2016-06.io.spdk:cnode24260", 00:17:43.628 "model_number": "SPDK_Controller\u001f", 00:17:43.628 "method": "nvmf_create_subsystem", 00:17:43.628 "req_id": 1 00:17:43.628 } 00:17:43.628 Got JSON-RPC error response 00:17:43.628 response: 00:17:43.628 { 00:17:43.628 "code": -32602, 00:17:43.628 "message": "Invalid MN SPDK_Controller\u001f" 00:17:43.628 }' 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:43.628 { 00:17:43.628 "nqn": "nqn.2016-06.io.spdk:cnode24260", 00:17:43.628 "model_number": "SPDK_Controller\u001f", 00:17:43.628 "method": "nvmf_create_subsystem", 00:17:43.628 "req_id": 1 00:17:43.628 } 00:17:43.628 Got JSON-RPC error response 00:17:43.628 response: 00:17:43.628 { 00:17:43.628 "code": -32602, 00:17:43.628 "message": "Invalid MN SPDK_Controller\u001f" 00:17:43.628 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.628 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:43.628 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:43.629 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:43.629 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:43.629 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.629 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'cI":=p*gCL2N{14M{\x*H' 00:17:43.629 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'cI":=p*gCL2N{14M{\x*H' nqn.2016-06.io.spdk:cnode32593 00:17:43.887 [2024-11-20 14:36:55.723821] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32593: invalid serial number 'cI":=p*gCL2N{14M{\x*H' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:43.887 { 00:17:43.887 "nqn": "nqn.2016-06.io.spdk:cnode32593", 00:17:43.887 "serial_number": "cI\":=p*gCL2N{14M{\\x*H", 00:17:43.887 "method": "nvmf_create_subsystem", 00:17:43.887 "req_id": 1 00:17:43.887 } 00:17:43.887 Got JSON-RPC error response 00:17:43.887 response: 00:17:43.887 { 00:17:43.887 "code": -32602, 00:17:43.887 "message": "Invalid SN cI\":=p*gCL2N{14M{\\x*H" 00:17:43.887 }' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:43.887 { 00:17:43.887 "nqn": "nqn.2016-06.io.spdk:cnode32593", 00:17:43.887 "serial_number": "cI\":=p*gCL2N{14M{\\x*H", 00:17:43.887 "method": "nvmf_create_subsystem", 00:17:43.887 "req_id": 1 00:17:43.887 } 00:17:43.887 Got JSON-RPC error response 00:17:43.887 response: 00:17:43.887 { 00:17:43.887 "code": -32602, 00:17:43.887 "message": "Invalid SN cI\":=p*gCL2N{14M{\\x*H" 00:17:43.887 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:43.887 
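The loop records above come from the script's `gen_random_s` helper, which assembles a random serial/model string one printable ASCII character at a time (`printf %x` to get the hex code, `echo -e '\xHH'` to turn it back into a character, `string+=` to append). A minimal standalone sketch of that pattern follows; it assumes bash (for `RANDOM` and `+=`), and only the helper name and the `printf`/`echo -e` round-trip are taken from the trace — the rest is illustrative:

```shell
# Sketch of the gen_random_s pattern traced above (bash assumed):
# build a string of N random printable ASCII characters (codes 32-127),
# one character per iteration via printf %x / echo -e, as in the log.
gen_random_s() {
    local length=$1 ll code string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 32 + RANDOM % 96 ))                  # random code in 32..127
        string+=$(echo -e "\\x$(printf '%x' "$code")")  # hex -> character
    done
    printf '%s\n' "$string"   # printf, not echo: the string may start with '-e'
}

s=$(gen_random_s 21)
printf 'generated %d characters\n' "${#s}"
```

The generated string is then handed to `rpc.py nvmf_create_subsystem` as a deliberately invalid serial number, which is why the trace ends each generation with an `Invalid SN` JSON-RPC error.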
14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.887 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:43.887 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:43.888 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:43.888 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:43.888 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:44.146 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 
00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.146 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 
00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:44.147 
14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:44.147 14:36:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'CWPN;)qqeB>tG@iLLM:.NT'\''Gz!fd:;85%<)_Yj/J' 00:17:44.147 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 
'CWPN;)qqeB>tG@iLLM:.NT'\''Gz!fd:;85%<)_Yj/J' nqn.2016-06.io.spdk:cnode27415 00:17:44.405 [2024-11-20 14:36:56.205424] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27415: invalid model number 'CWPN;)qqeB>tG@iLLM:.NT'Gz!fd:;85%<)_Yj/J' 00:17:44.405 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:44.405 { 00:17:44.405 "nqn": "nqn.2016-06.io.spdk:cnode27415", 00:17:44.405 "model_number": "CWPN;)qqeB>tG@iLLM:.N\u007fT'\''Gz!fd:;85%<)_Yj/J", 00:17:44.405 "method": "nvmf_create_subsystem", 00:17:44.405 "req_id": 1 00:17:44.405 } 00:17:44.405 Got JSON-RPC error response 00:17:44.405 response: 00:17:44.405 { 00:17:44.405 "code": -32602, 00:17:44.405 "message": "Invalid MN CWPN;)qqeB>tG@iLLM:.N\u007fT'\''Gz!fd:;85%<)_Yj/J" 00:17:44.405 }' 00:17:44.405 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:44.405 { 00:17:44.405 "nqn": "nqn.2016-06.io.spdk:cnode27415", 00:17:44.405 "model_number": "CWPN;)qqeB>tG@iLLM:.N\u007fT'Gz!fd:;85%<)_Yj/J", 00:17:44.405 "method": "nvmf_create_subsystem", 00:17:44.405 "req_id": 1 00:17:44.405 } 00:17:44.405 Got JSON-RPC error response 00:17:44.405 response: 00:17:44.405 { 00:17:44.405 "code": -32602, 00:17:44.405 "message": "Invalid MN CWPN;)qqeB>tG@iLLM:.N\u007fT'Gz!fd:;85%<)_Yj/J" 00:17:44.405 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:44.405 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:44.662 [2024-11-20 14:36:56.410163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.662 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
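The `[[ $out == *\I\n\v\a\l\i\d\ \M\N* ]]` record above is xtrace's quoting of a plain glob match: the script captures `rpc.py`'s failing JSON-RPC output and asserts the error message contains the expected text. A minimal sketch of that check pattern follows; `rpc_stub` is a hypothetical stand-in for `scripts/rpc.py` (which needs a running SPDK target), and bash is assumed for `[[ ]]`:

```shell
# Sketch of the invalid.sh error-matching pattern: run an RPC that is
# expected to fail, capture its output, and glob-match the error message.
rpc_stub() {
    # Hypothetical stand-in for: scripts/rpc.py nvmf_create_subsystem ...
    echo '{"code": -32602, "message": "Invalid SN xyz"}'
    return 1
}

out=$(rpc_stub 2>&1) || true          # failure is expected; keep the output
if [[ $out == *"Invalid SN"* ]]; then
    echo "got expected invalid-SN error"
else
    echo "unexpected response: $out"
fi
```

The real script applies the same shape with `*Invalid SN*`, `*Invalid MN*`, and `*Invalid cntlid range*` patterns for the serial-number, model-number, and controller-ID cases seen in this trace.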
target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:44.920 [2024-11-20 14:36:56.831583] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:44.920 { 00:17:44.920 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:44.920 "listen_address": { 00:17:44.920 "trtype": "tcp", 00:17:44.920 "traddr": "", 00:17:44.920 "trsvcid": "4421" 00:17:44.920 }, 00:17:44.920 "method": "nvmf_subsystem_remove_listener", 00:17:44.920 "req_id": 1 00:17:44.920 } 00:17:44.920 Got JSON-RPC error response 00:17:44.920 response: 00:17:44.920 { 00:17:44.920 "code": -32602, 00:17:44.920 "message": "Invalid parameters" 00:17:44.920 }' 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:44.920 { 00:17:44.920 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:44.920 "listen_address": { 00:17:44.920 "trtype": "tcp", 00:17:44.920 "traddr": "", 00:17:44.920 "trsvcid": "4421" 00:17:44.920 }, 00:17:44.920 "method": "nvmf_subsystem_remove_listener", 00:17:44.920 "req_id": 1 00:17:44.920 } 00:17:44.920 Got JSON-RPC error response 00:17:44.920 response: 00:17:44.920 { 00:17:44.920 "code": -32602, 00:17:44.920 "message": "Invalid parameters" 00:17:44.920 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:44.920 14:36:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30727 -i 0 00:17:45.178 [2024-11-20 14:36:57.032219] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30727: invalid cntlid range [0-65519] 00:17:45.178 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:45.178 { 00:17:45.178 "nqn": "nqn.2016-06.io.spdk:cnode30727", 00:17:45.178 "min_cntlid": 0, 00:17:45.178 "method": "nvmf_create_subsystem", 00:17:45.178 "req_id": 1 00:17:45.178 } 00:17:45.178 Got JSON-RPC error response 00:17:45.178 response: 00:17:45.178 { 00:17:45.178 "code": -32602, 00:17:45.178 "message": "Invalid cntlid range [0-65519]" 00:17:45.178 }' 00:17:45.178 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:45.178 { 00:17:45.178 "nqn": "nqn.2016-06.io.spdk:cnode30727", 00:17:45.178 "min_cntlid": 0, 00:17:45.178 "method": "nvmf_create_subsystem", 00:17:45.178 "req_id": 1 00:17:45.178 } 00:17:45.178 Got JSON-RPC error response 00:17:45.178 response: 00:17:45.178 { 00:17:45.178 "code": -32602, 00:17:45.178 "message": "Invalid cntlid range [0-65519]" 00:17:45.178 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.178 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29758 -i 65520 00:17:45.436 [2024-11-20 14:36:57.224873] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29758: invalid cntlid range [65520-65519] 00:17:45.436 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:45.436 { 00:17:45.436 "nqn": "nqn.2016-06.io.spdk:cnode29758", 00:17:45.436 "min_cntlid": 65520, 00:17:45.436 "method": "nvmf_create_subsystem", 00:17:45.436 "req_id": 1 00:17:45.436 } 00:17:45.436 Got JSON-RPC error 
response 00:17:45.436 response: 00:17:45.436 { 00:17:45.436 "code": -32602, 00:17:45.436 "message": "Invalid cntlid range [65520-65519]" 00:17:45.436 }' 00:17:45.436 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:45.436 { 00:17:45.436 "nqn": "nqn.2016-06.io.spdk:cnode29758", 00:17:45.436 "min_cntlid": 65520, 00:17:45.436 "method": "nvmf_create_subsystem", 00:17:45.436 "req_id": 1 00:17:45.436 } 00:17:45.436 Got JSON-RPC error response 00:17:45.436 response: 00:17:45.436 { 00:17:45.436 "code": -32602, 00:17:45.436 "message": "Invalid cntlid range [65520-65519]" 00:17:45.436 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.436 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2192 -I 0 00:17:45.693 [2024-11-20 14:36:57.433587] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2192: invalid cntlid range [1-0] 00:17:45.693 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:45.693 { 00:17:45.693 "nqn": "nqn.2016-06.io.spdk:cnode2192", 00:17:45.693 "max_cntlid": 0, 00:17:45.693 "method": "nvmf_create_subsystem", 00:17:45.693 "req_id": 1 00:17:45.693 } 00:17:45.693 Got JSON-RPC error response 00:17:45.693 response: 00:17:45.693 { 00:17:45.693 "code": -32602, 00:17:45.693 "message": "Invalid cntlid range [1-0]" 00:17:45.693 }' 00:17:45.693 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:45.693 { 00:17:45.693 "nqn": "nqn.2016-06.io.spdk:cnode2192", 00:17:45.693 "max_cntlid": 0, 00:17:45.693 "method": "nvmf_create_subsystem", 00:17:45.693 "req_id": 1 00:17:45.693 } 00:17:45.693 Got JSON-RPC error response 00:17:45.693 response: 00:17:45.693 { 00:17:45.693 "code": -32602, 00:17:45.693 "message": "Invalid cntlid range [1-0]" 00:17:45.693 } == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.693 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14906 -I 65520 00:17:45.693 [2024-11-20 14:36:57.646365] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14906: invalid cntlid range [1-65520] 00:17:45.951 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:45.951 { 00:17:45.951 "nqn": "nqn.2016-06.io.spdk:cnode14906", 00:17:45.951 "max_cntlid": 65520, 00:17:45.951 "method": "nvmf_create_subsystem", 00:17:45.951 "req_id": 1 00:17:45.951 } 00:17:45.951 Got JSON-RPC error response 00:17:45.951 response: 00:17:45.951 { 00:17:45.951 "code": -32602, 00:17:45.951 "message": "Invalid cntlid range [1-65520]" 00:17:45.951 }' 00:17:45.951 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:45.951 { 00:17:45.951 "nqn": "nqn.2016-06.io.spdk:cnode14906", 00:17:45.951 "max_cntlid": 65520, 00:17:45.951 "method": "nvmf_create_subsystem", 00:17:45.951 "req_id": 1 00:17:45.951 } 00:17:45.951 Got JSON-RPC error response 00:17:45.951 response: 00:17:45.951 { 00:17:45.951 "code": -32602, 00:17:45.951 "message": "Invalid cntlid range [1-65520]" 00:17:45.951 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.951 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26394 -i 6 -I 5 00:17:45.951 [2024-11-20 14:36:57.851102] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26394: invalid cntlid range [6-5] 00:17:45.951 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:45.951 { 00:17:45.951 "nqn": "nqn.2016-06.io.spdk:cnode26394", 00:17:45.951 
"min_cntlid": 6, 00:17:45.951 "max_cntlid": 5, 00:17:45.951 "method": "nvmf_create_subsystem", 00:17:45.951 "req_id": 1 00:17:45.951 } 00:17:45.951 Got JSON-RPC error response 00:17:45.951 response: 00:17:45.951 { 00:17:45.951 "code": -32602, 00:17:45.951 "message": "Invalid cntlid range [6-5]" 00:17:45.951 }' 00:17:45.951 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:45.951 { 00:17:45.951 "nqn": "nqn.2016-06.io.spdk:cnode26394", 00:17:45.951 "min_cntlid": 6, 00:17:45.951 "max_cntlid": 5, 00:17:45.951 "method": "nvmf_create_subsystem", 00:17:45.951 "req_id": 1 00:17:45.951 } 00:17:45.951 Got JSON-RPC error response 00:17:45.951 response: 00:17:45.951 { 00:17:45.951 "code": -32602, 00:17:45.951 "message": "Invalid cntlid range [6-5]" 00:17:45.951 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:45.951 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:46.209 { 00:17:46.209 "name": "foobar", 00:17:46.209 "method": "nvmf_delete_target", 00:17:46.209 "req_id": 1 00:17:46.209 } 00:17:46.209 Got JSON-RPC error response 00:17:46.209 response: 00:17:46.209 { 00:17:46.209 "code": -32602, 00:17:46.209 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:46.209 }' 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:46.209 { 00:17:46.209 "name": "foobar", 00:17:46.209 "method": "nvmf_delete_target", 00:17:46.209 "req_id": 1 00:17:46.209 } 00:17:46.209 Got JSON-RPC error response 00:17:46.209 response: 00:17:46.209 { 00:17:46.209 "code": -32602, 00:17:46.209 "message": "The specified target doesn't exist, cannot delete it." 
00:17:46.209 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.209 14:36:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.209 rmmod nvme_tcp 00:17:46.209 rmmod nvme_fabrics 00:17:46.209 rmmod nvme_keyring 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1535127 ']' 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1535127 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1535127 ']' 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1535127 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535127 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535127' 00:17:46.209 killing process with pid 1535127 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1535127 00:17:46.209 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1535127 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.469 14:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.469 14:36:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.002 00:17:49.002 real 0m12.044s 00:17:49.002 user 0m18.679s 00:17:49.002 sys 0m5.456s 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:49.002 ************************************ 00:17:49.002 END TEST nvmf_invalid 00:17:49.002 ************************************ 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.002 ************************************ 00:17:49.002 START TEST nvmf_connect_stress 00:17:49.002 ************************************ 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:49.002 * Looking for test storage... 
00:17:49.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:49.002 14:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:49.002 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.002 14:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.003 --rc genhtml_branch_coverage=1 00:17:49.003 --rc genhtml_function_coverage=1 00:17:49.003 --rc genhtml_legend=1 00:17:49.003 --rc geninfo_all_blocks=1 00:17:49.003 --rc geninfo_unexecuted_blocks=1 00:17:49.003 00:17:49.003 ' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.003 --rc genhtml_branch_coverage=1 00:17:49.003 --rc genhtml_function_coverage=1 00:17:49.003 --rc genhtml_legend=1 00:17:49.003 --rc geninfo_all_blocks=1 00:17:49.003 --rc geninfo_unexecuted_blocks=1 00:17:49.003 00:17:49.003 ' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.003 --rc genhtml_branch_coverage=1 00:17:49.003 --rc genhtml_function_coverage=1 00:17:49.003 --rc genhtml_legend=1 00:17:49.003 --rc geninfo_all_blocks=1 00:17:49.003 --rc geninfo_unexecuted_blocks=1 00:17:49.003 00:17:49.003 ' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.003 --rc genhtml_branch_coverage=1 00:17:49.003 --rc genhtml_function_coverage=1 00:17:49.003 --rc genhtml_legend=1 00:17:49.003 --rc geninfo_all_blocks=1 00:17:49.003 --rc geninfo_unexecuted_blocks=1 00:17:49.003 00:17:49.003 ' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.003 14:37:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.490 14:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:54.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.490 14:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:54.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.490 14:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:54.490 Found net devices under 0000:86:00.0: cvl_0_0 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:54.490 Found net devices under 0000:86:00.1: cvl_0_1 
00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
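The discovery loop traced above globs `/sys/bus/pci/devices/$pci/net/` for each matched NIC address, strips the path prefix with `##*/`, and accumulates the interface names into `net_devs` before deciding `is_hw=yes`. A minimal sketch of that pattern, run against a throwaway mock of the sysfs layout so it needs no real hardware (the mock directory and device IDs are illustrative, not the exact nvmf/common.sh logic):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device discovery pattern from nvmf/common.sh,
# exercised against a mock of the /sys/bus/pci/devices/$pci/net/* layout.
set -euo pipefail

mock=$(mktemp -d)
trap 'rm -rf "$mock"' EXIT

# Fake the two E810 functions from the log, each exposing one netdev.
mkdir -p "$mock/0000:86:00.0/net/cvl_0_0" "$mock/0000:86:00.1/net/cvl_0_1"

pci_devs=(0000:86:00.0 0000:86:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("$mock/$pci/net/"*)
    # Strip the leading path, keeping only the interface names.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

In the real script the glob runs under the live sysfs, and an empty `net_devs` at the end is what forces the `(( 2 == 0 ))`-style bail-out seen in the trace.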
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.490 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:54.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:17:54.749 00:17:54.749 --- 10.0.0.2 ping statistics --- 00:17:54.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.749 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:54.749 00:17:54.749 --- 10.0.0.1 ping statistics --- 00:17:54.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.749 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:54.749 14:37:06 
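The `nvmf_tcp_init` steps above isolate the target NIC in a fresh network namespace (`cvl_0_0_ns_spdk`), give the target side 10.0.0.2/24 and the initiator side 10.0.0.1/24, open TCP port 4420 in iptables, and verify reachability with one ping in each direction. A condensed sketch of that sequence; since the real commands need root and the physical `cvl_0_*` interfaces, the helper below only assembles the command strings (`run="echo"` keeps it side-effect free; drop the echo to execute for real):

```shell
#!/usr/bin/env bash
# Hedged sketch of the TCP test-net setup done by nvmf_tcp_init
# (nvmf/common.sh). Interface/IP values mirror the log; 'run=echo'
# prints each step instead of executing it.
set -euo pipefail

run="echo"
ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0 ini_if=cvl_0_1
tgt_ip=10.0.0.2 ini_ip=10.0.0.1

cmds=()
cmds+=("ip netns add $ns")                                   # target namespace
cmds+=("ip link set $tgt_if netns $ns")                      # move target NIC in
cmds+=("ip addr add $ini_ip/24 dev $ini_if")                 # initiator address
cmds+=("ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if")
cmds+=("ip link set $ini_if up")
cmds+=("ip netns exec $ns ip link set $tgt_if up")
cmds+=("ip netns exec $ns ip link set lo up")
cmds+=("iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT")
cmds+=("ping -c 1 $tgt_ip")                                  # reachability check

for c in "${cmds[@]}"; do $run $c; done
```

With the namespace in place, `NVMF_TARGET_NS_CMD` simply prefixes every target invocation with `ip netns exec cvl_0_0_ns_spdk`, which is why the trace later launches `nvmf_tgt` through that wrapper.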
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1539529 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1539529 00:17:54.749 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:54.750 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1539529 ']' 00:17:54.750 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.750 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.750 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.750 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.750 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.008 [2024-11-20 14:37:06.716517] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:17:55.008 [2024-11-20 14:37:06.716558] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.008 [2024-11-20 14:37:06.793614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:55.008 [2024-11-20 14:37:06.836291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.008 [2024-11-20 14:37:06.836329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.008 [2024-11-20 14:37:06.836337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.008 [2024-11-20 14:37:06.836343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.008 [2024-11-20 14:37:06.836349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:55.008 [2024-11-20 14:37:06.837719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.008 [2024-11-20 14:37:06.837828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.008 [2024-11-20 14:37:06.837829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.008 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.008 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:55.008 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:55.008 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:55.008 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.266 [2024-11-20 14:37:06.983811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.266 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.266 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.266 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.266 [2024-11-20 14:37:07.004074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.267 NULL1 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1539557 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.267 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.523 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.523 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:55.523 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.523 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
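After building `rpc.txt` with twenty batched requests (the `seq 1 20` / `cat` loop above), connect_stress.sh repeatedly probes the background workload with `kill -0 $PERF_PID` between RPC batches: signal 0 delivers nothing but fails if the process has exited. A small sketch of that loop structure, with `sleep` standing in for the real `connect_stress` binary (an assumption; the real workload drives NVMe/TCP connects against cnode1):

```shell
#!/usr/bin/env bash
# Sketch of the stress-loop pattern from target/connect_stress.sh:
# batch RPC requests into a file while using 'kill -0' as a cheap
# liveness probe on the background workload.
set -euo pipefail

sleep 2 & PERF_PID=$!        # stand-in for the connect_stress binary

rpcs=$(mktemp)
trap 'rm -f "$rpcs"' EXIT

checks=0
for i in $(seq 1 5); do
    # Queue another RPC request (placeholder payload here)...
    echo "placeholder_rpc_$i" >> "$rpcs"
    # ...and bail out early if the workload has already died.
    kill -0 "$PERF_PID" || { echo "workload died"; exit 1; }
    checks=$((checks + 1))
done
wait "$PERF_PID"
```

That probe is why the trace alternates `kill -0 1539557` with `rpc_cmd` for the rest of the run: each repetition is one liveness check plus one batch of RPCs submitted while the stress workload keeps reconnecting.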
common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.523 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.086 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.086 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:56.086 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.086 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.086 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.343 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.343 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:56.343 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.343 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.343 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.605 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.605 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:56.605 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.605 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.605 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.864 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.864 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:56.864 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.864 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.864 14:37:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.122 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.122 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:57.122 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.122 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.122 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.686 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.686 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:57.686 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.686 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.686 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.943 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.943 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:57.943 14:37:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.944 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.944 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.201 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.201 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:58.201 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.201 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.201 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.458 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.458 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:58.458 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.458 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.458 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.024 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.024 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:59.024 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.024 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.024 
14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.281 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.281 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:59.281 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.281 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.281 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.538 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.538 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:59.538 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.538 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.538 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.795 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.795 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:17:59.795 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.795 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.796 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.053 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.053 
14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:00.053 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.053 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.053 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.619 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.619 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:00.619 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.619 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.619 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.877 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.877 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:00.877 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.877 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.877 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.134 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.134 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:01.134 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:18:01.134 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.134 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.392 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.392 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:01.392 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.392 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.392 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.957 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.957 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:01.957 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.957 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.957 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.214 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.215 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:02.215 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.215 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.215 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:18:02.472 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.472 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:02.472 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.472 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.472 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.730 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.730 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:02.730 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.730 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.731 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.988 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.988 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:02.988 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.988 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.988 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.554 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.554 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1539557 00:18:03.554 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.554 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.554 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.811 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.811 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:03.811 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:03.811 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.811 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.069 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.069 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:04.069 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.069 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.069 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.327 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.327 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:04.327 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.327 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:04.327 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:04.892 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.892 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:04.892 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:04.892 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.892 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.150 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.150 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:05.150 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:05.150 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.150 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:05.409 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1539557 00:18:05.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1539557) - No such process 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1539557 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:05.409 rmmod nvme_tcp 00:18:05.409 rmmod nvme_fabrics 00:18:05.409 rmmod nvme_keyring 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1539529 ']' 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1539529 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1539529 ']' 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1539529 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539529 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539529' 00:18:05.409 killing process with pid 1539529 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1539529 00:18:05.409 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1539529 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:05.668 14:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.668 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.206 00:18:08.206 real 0m19.154s 00:18:08.206 user 0m39.609s 00:18:08.206 sys 0m8.551s 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.206 ************************************ 00:18:08.206 END TEST nvmf_connect_stress 00:18:08.206 ************************************ 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:08.206 ************************************ 00:18:08.206 START TEST nvmf_fused_ordering 00:18:08.206 ************************************ 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:08.206 * Looking for test storage... 
00:18:08.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:08.206 14:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:08.206 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.207 14:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.207 --rc genhtml_branch_coverage=1 00:18:08.207 --rc genhtml_function_coverage=1 00:18:08.207 --rc genhtml_legend=1 00:18:08.207 --rc geninfo_all_blocks=1 00:18:08.207 --rc geninfo_unexecuted_blocks=1 00:18:08.207 00:18:08.207 ' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.207 --rc genhtml_branch_coverage=1 00:18:08.207 --rc genhtml_function_coverage=1 00:18:08.207 --rc genhtml_legend=1 00:18:08.207 --rc geninfo_all_blocks=1 00:18:08.207 --rc geninfo_unexecuted_blocks=1 00:18:08.207 00:18:08.207 ' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.207 --rc genhtml_branch_coverage=1 00:18:08.207 --rc genhtml_function_coverage=1 00:18:08.207 --rc genhtml_legend=1 00:18:08.207 --rc geninfo_all_blocks=1 00:18:08.207 --rc geninfo_unexecuted_blocks=1 00:18:08.207 00:18:08.207 ' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.207 --rc genhtml_branch_coverage=1 00:18:08.207 --rc genhtml_function_coverage=1 00:18:08.207 --rc genhtml_legend=1 00:18:08.207 --rc geninfo_all_blocks=1 00:18:08.207 --rc geninfo_unexecuted_blocks=1 00:18:08.207 00:18:08.207 ' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:08.207 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:14.794 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.795 14:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:14.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:14.795 14:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:14.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.795 14:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:14.795 Found net devices under 0000:86:00.0: cvl_0_0 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:14.795 Found net devices under 0000:86:00.1: cvl_0_1 
00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:14.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:14.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:18:14.795 00:18:14.795 --- 10.0.0.2 ping statistics --- 00:18:14.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.795 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:18:14.795 00:18:14.795 --- 10.0.0.1 ping statistics --- 00:18:14.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.795 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.795 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:14.796 14:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1544741 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1544741 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1544741 ']' 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.796 14:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 [2024-11-20 14:37:25.874236] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:18:14.796 [2024-11-20 14:37:25.874282] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.796 [2024-11-20 14:37:25.954699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.796 [2024-11-20 14:37:25.995755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.796 [2024-11-20 14:37:25.995793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.796 [2024-11-20 14:37:25.995800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.796 [2024-11-20 14:37:25.995806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.796 [2024-11-20 14:37:25.995812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:14.796 [2024-11-20 14:37:25.996395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 [2024-11-20 14:37:26.133410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 [2024-11-20 14:37:26.153597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 NULL1 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.796 14:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:14.796 [2024-11-20 14:37:26.212937] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:14.796 [2024-11-20 14:37:26.212988] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544947 ] 00:18:14.796 Attached to nqn.2016-06.io.spdk:cnode1 00:18:14.796 Namespace ID: 1 size: 1GB 00:18:14.796 fused_ordering(0) 00:18:14.796 fused_ordering(1) 00:18:14.796 fused_ordering(2) 00:18:14.796 fused_ordering(3) 00:18:14.796 fused_ordering(4) 00:18:14.796 fused_ordering(5) 00:18:14.796 fused_ordering(6) 00:18:14.796 fused_ordering(7) 00:18:14.796 fused_ordering(8) 00:18:14.796 fused_ordering(9) 00:18:14.796 fused_ordering(10) 00:18:14.796 fused_ordering(11) 00:18:14.796 fused_ordering(12) 00:18:14.796 fused_ordering(13) 00:18:14.796 fused_ordering(14) 00:18:14.796 fused_ordering(15) 00:18:14.796 fused_ordering(16) 00:18:14.796 fused_ordering(17) 00:18:14.796 fused_ordering(18) 00:18:14.796 fused_ordering(19) 00:18:14.796 fused_ordering(20) 00:18:14.796 fused_ordering(21) 00:18:14.796 fused_ordering(22) 00:18:14.796 fused_ordering(23) 00:18:14.796 fused_ordering(24) 00:18:14.796 fused_ordering(25) 00:18:14.796 fused_ordering(26) 00:18:14.796 fused_ordering(27) 00:18:14.796 
fused_ordering(28) 00:18:14.796 fused_ordering(29) 00:18:14.796 fused_ordering(30) 00:18:14.796 fused_ordering(31) 00:18:14.796 fused_ordering(32) 00:18:14.796 fused_ordering(33) 00:18:14.796 fused_ordering(34) 00:18:14.796 fused_ordering(35) 00:18:14.796 fused_ordering(36) 00:18:14.796 fused_ordering(37) 00:18:14.796 fused_ordering(38) 00:18:14.796 fused_ordering(39) 00:18:14.796 fused_ordering(40) 00:18:14.796 fused_ordering(41) 00:18:14.796 fused_ordering(42) 00:18:14.796 fused_ordering(43) 00:18:14.796 fused_ordering(44) 00:18:14.796 fused_ordering(45) 00:18:14.796 fused_ordering(46) 00:18:14.796 fused_ordering(47) 00:18:14.796 fused_ordering(48) 00:18:14.796 fused_ordering(49) 00:18:14.796 fused_ordering(50) 00:18:14.796 fused_ordering(51) 00:18:14.796 fused_ordering(52) 00:18:14.796 fused_ordering(53) 00:18:14.796 fused_ordering(54) 00:18:14.796 fused_ordering(55) 00:18:14.796 fused_ordering(56) 00:18:14.796 fused_ordering(57) 00:18:14.796 fused_ordering(58) 00:18:14.796 fused_ordering(59) 00:18:14.796 fused_ordering(60) 00:18:14.796 fused_ordering(61) 00:18:14.796 fused_ordering(62) 00:18:14.796 fused_ordering(63) 00:18:14.796 fused_ordering(64) 00:18:14.796 fused_ordering(65) 00:18:14.797 fused_ordering(66) 00:18:14.797 fused_ordering(67) 00:18:14.797 fused_ordering(68) 00:18:14.797 fused_ordering(69) 00:18:14.797 fused_ordering(70) 00:18:14.797 fused_ordering(71) 00:18:14.797 fused_ordering(72) 00:18:14.797 fused_ordering(73) 00:18:14.797 fused_ordering(74) 00:18:14.797 fused_ordering(75) 00:18:14.797 fused_ordering(76) 00:18:14.797 fused_ordering(77) 00:18:14.797 fused_ordering(78) 00:18:14.797 fused_ordering(79) 00:18:14.797 fused_ordering(80) 00:18:14.797 fused_ordering(81) 00:18:14.797 fused_ordering(82) 00:18:14.797 fused_ordering(83) 00:18:14.797 fused_ordering(84) 00:18:14.797 fused_ordering(85) 00:18:14.797 fused_ordering(86) 00:18:14.797 fused_ordering(87) 00:18:14.797 fused_ordering(88) 00:18:14.797 fused_ordering(89) 00:18:14.797 
fused_ordering(90) 00:18:14.797 fused_ordering(91) 00:18:14.797 fused_ordering(92) 00:18:14.797 fused_ordering(93) 00:18:14.797 fused_ordering(94) 00:18:14.797 fused_ordering(95) 00:18:14.797 fused_ordering(96) 00:18:14.797 fused_ordering(97) 00:18:14.797 fused_ordering(98) 00:18:14.797 fused_ordering(99) 00:18:14.797 fused_ordering(100) 00:18:14.797 fused_ordering(101) 00:18:14.797 fused_ordering(102) 00:18:14.797 fused_ordering(103) 00:18:14.797 fused_ordering(104) 00:18:14.797 fused_ordering(105) 00:18:14.797 fused_ordering(106) 00:18:14.797 fused_ordering(107) 00:18:14.797 fused_ordering(108) 00:18:14.797 fused_ordering(109) 00:18:14.797 fused_ordering(110) 00:18:14.797 fused_ordering(111) 00:18:14.797 fused_ordering(112) 00:18:14.797 fused_ordering(113) 00:18:14.797 fused_ordering(114) 00:18:14.797 fused_ordering(115) 00:18:14.797 fused_ordering(116) 00:18:14.797 fused_ordering(117) 00:18:14.797 fused_ordering(118) 00:18:14.797 fused_ordering(119) 00:18:14.797 fused_ordering(120) 00:18:14.797 fused_ordering(121) 00:18:14.797 fused_ordering(122) 00:18:14.797 fused_ordering(123) 00:18:14.797 fused_ordering(124) 00:18:14.797 fused_ordering(125) 00:18:14.797 fused_ordering(126) 00:18:14.797 fused_ordering(127) 00:18:14.797 fused_ordering(128) 00:18:14.797 fused_ordering(129) 00:18:14.797 fused_ordering(130) 00:18:14.797 fused_ordering(131) 00:18:14.797 fused_ordering(132) 00:18:14.797 fused_ordering(133) 00:18:14.797 fused_ordering(134) 00:18:14.797 fused_ordering(135) 00:18:14.797 fused_ordering(136) 00:18:14.797 fused_ordering(137) 00:18:14.797 fused_ordering(138) 00:18:14.797 fused_ordering(139) 00:18:14.797 fused_ordering(140) 00:18:14.797 fused_ordering(141) 00:18:14.797 fused_ordering(142) 00:18:14.797 fused_ordering(143) 00:18:14.797 fused_ordering(144) 00:18:14.797 fused_ordering(145) 00:18:14.797 fused_ordering(146) 00:18:14.797 fused_ordering(147) 00:18:14.797 fused_ordering(148) 00:18:14.797 fused_ordering(149) 00:18:14.797 fused_ordering(150) 
00:18:14.797 fused_ordering(151) 00:18:14.797 fused_ordering(152) 00:18:14.797 fused_ordering(153) 00:18:14.797 fused_ordering(154) 00:18:14.797 fused_ordering(155) 00:18:14.797 fused_ordering(156) 00:18:14.797 fused_ordering(157) 00:18:14.797 fused_ordering(158) 00:18:14.797 fused_ordering(159) 00:18:14.797 fused_ordering(160) 00:18:14.797 fused_ordering(161) 00:18:14.797 fused_ordering(162) 00:18:14.797 fused_ordering(163) 00:18:14.797 fused_ordering(164) 00:18:14.797 fused_ordering(165) 00:18:14.797 fused_ordering(166) 00:18:14.797 fused_ordering(167) 00:18:14.797 fused_ordering(168) 00:18:14.797 fused_ordering(169) 00:18:14.797 fused_ordering(170) 00:18:14.797 fused_ordering(171) 00:18:14.797 fused_ordering(172) 00:18:14.797 fused_ordering(173) 00:18:14.797 fused_ordering(174) 00:18:14.797 fused_ordering(175) 00:18:14.797 fused_ordering(176) 00:18:14.797 fused_ordering(177) 00:18:14.797 fused_ordering(178) 00:18:14.797 fused_ordering(179) 00:18:14.797 fused_ordering(180) 00:18:14.797 fused_ordering(181) 00:18:14.797 fused_ordering(182) 00:18:14.797 fused_ordering(183) 00:18:14.797 fused_ordering(184) 00:18:14.797 fused_ordering(185) 00:18:14.797 fused_ordering(186) 00:18:14.797 fused_ordering(187) 00:18:14.797 fused_ordering(188) 00:18:14.797 fused_ordering(189) 00:18:14.797 fused_ordering(190) 00:18:14.797 fused_ordering(191) 00:18:14.797 fused_ordering(192) 00:18:14.797 fused_ordering(193) 00:18:14.797 fused_ordering(194) 00:18:14.797 fused_ordering(195) 00:18:14.797 fused_ordering(196) 00:18:14.797 fused_ordering(197) 00:18:14.797 fused_ordering(198) 00:18:14.797 fused_ordering(199) 00:18:14.797 fused_ordering(200) 00:18:14.797 fused_ordering(201) 00:18:14.797 fused_ordering(202) 00:18:14.797 fused_ordering(203) 00:18:14.797 fused_ordering(204) 00:18:14.797 fused_ordering(205) 00:18:15.056 fused_ordering(206) 00:18:15.056 fused_ordering(207) 00:18:15.056 fused_ordering(208) 00:18:15.056 fused_ordering(209) 00:18:15.056 fused_ordering(210) 00:18:15.056 
fused_ordering(211) 00:18:15.056 fused_ordering(212) 00:18:15.056 fused_ordering(213) 00:18:15.056 fused_ordering(214) 00:18:15.056 fused_ordering(215) 00:18:15.056 fused_ordering(216) 00:18:15.056 fused_ordering(217) 00:18:15.056 fused_ordering(218) 00:18:15.056 fused_ordering(219) 00:18:15.056 fused_ordering(220) 00:18:15.056 fused_ordering(221) 00:18:15.056 fused_ordering(222) 00:18:15.056 fused_ordering(223) 00:18:15.056 fused_ordering(224) 00:18:15.056 fused_ordering(225) 00:18:15.056 fused_ordering(226) 00:18:15.056 fused_ordering(227) 00:18:15.056 fused_ordering(228) 00:18:15.056 fused_ordering(229) 00:18:15.056 fused_ordering(230) 00:18:15.056 fused_ordering(231) 00:18:15.056 fused_ordering(232) 00:18:15.056 fused_ordering(233) 00:18:15.056 fused_ordering(234) 00:18:15.056 fused_ordering(235) 00:18:15.056 fused_ordering(236) 00:18:15.056 fused_ordering(237) 00:18:15.056 fused_ordering(238) 00:18:15.056 fused_ordering(239) 00:18:15.056 fused_ordering(240) 00:18:15.056 fused_ordering(241) 00:18:15.056 fused_ordering(242) 00:18:15.056 fused_ordering(243) 00:18:15.056 fused_ordering(244) 00:18:15.056 fused_ordering(245) 00:18:15.056 fused_ordering(246) 00:18:15.056 fused_ordering(247) 00:18:15.056 fused_ordering(248) 00:18:15.056 fused_ordering(249) 00:18:15.056 fused_ordering(250) 00:18:15.056 fused_ordering(251) 00:18:15.056 fused_ordering(252) 00:18:15.056 fused_ordering(253) 00:18:15.056 fused_ordering(254) 00:18:15.056 fused_ordering(255) 00:18:15.056 fused_ordering(256) 00:18:15.056 fused_ordering(257) 00:18:15.056 fused_ordering(258) 00:18:15.056 fused_ordering(259) 00:18:15.056 fused_ordering(260) 00:18:15.056 fused_ordering(261) 00:18:15.056 fused_ordering(262) 00:18:15.056 fused_ordering(263) 00:18:15.056 fused_ordering(264) 00:18:15.056 fused_ordering(265) 00:18:15.056 fused_ordering(266) 00:18:15.056 fused_ordering(267) 00:18:15.056 fused_ordering(268) 00:18:15.056 fused_ordering(269) 00:18:15.056 fused_ordering(270) 00:18:15.056 fused_ordering(271) 
00:18:15.056 fused_ordering(272) 00:18:15.056 fused_ordering(273) 00:18:15.056 fused_ordering(274) 00:18:15.056 fused_ordering(275) 00:18:15.056 fused_ordering(276) 00:18:15.056 fused_ordering(277) 00:18:15.056 fused_ordering(278) 00:18:15.056 fused_ordering(279) 00:18:15.056 fused_ordering(280) 00:18:15.056 fused_ordering(281) 00:18:15.056 fused_ordering(282) 00:18:15.056 fused_ordering(283) 00:18:15.056 fused_ordering(284) 00:18:15.056 fused_ordering(285) 00:18:15.056 fused_ordering(286) 00:18:15.056 fused_ordering(287) 00:18:15.056 fused_ordering(288) 00:18:15.056 fused_ordering(289) 00:18:15.056 fused_ordering(290) 00:18:15.056 fused_ordering(291) 00:18:15.056 fused_ordering(292) 00:18:15.056 fused_ordering(293) 00:18:15.056 fused_ordering(294) 00:18:15.056 fused_ordering(295) 00:18:15.056 fused_ordering(296) 00:18:15.056 fused_ordering(297) 00:18:15.056 fused_ordering(298) 00:18:15.056 fused_ordering(299) 00:18:15.056 fused_ordering(300) 00:18:15.056 fused_ordering(301) 00:18:15.056 fused_ordering(302) 00:18:15.056 fused_ordering(303) 00:18:15.056 fused_ordering(304) 00:18:15.056 fused_ordering(305) 00:18:15.056 fused_ordering(306) 00:18:15.056 fused_ordering(307) 00:18:15.056 fused_ordering(308) 00:18:15.056 fused_ordering(309) 00:18:15.056 fused_ordering(310) 00:18:15.056 fused_ordering(311) 00:18:15.056 fused_ordering(312) 00:18:15.056 fused_ordering(313) 00:18:15.056 fused_ordering(314) 00:18:15.056 fused_ordering(315) 00:18:15.056 fused_ordering(316) 00:18:15.056 fused_ordering(317) 00:18:15.056 fused_ordering(318) 00:18:15.056 fused_ordering(319) 00:18:15.056 fused_ordering(320) 00:18:15.056 fused_ordering(321) 00:18:15.056 fused_ordering(322) 00:18:15.056 fused_ordering(323) 00:18:15.056 fused_ordering(324) 00:18:15.056 fused_ordering(325) 00:18:15.056 fused_ordering(326) 00:18:15.056 fused_ordering(327) 00:18:15.056 fused_ordering(328) 00:18:15.056 fused_ordering(329) 00:18:15.056 fused_ordering(330) 00:18:15.056 fused_ordering(331) 00:18:15.056 
fused_ordering(332) 00:18:15.057 [fused_ordering(333) through fused_ordering(1022) elided: one counter line per iteration, timestamps 00:18:15.057 through 00:18:16.147] fused_ordering(1023) 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.147 rmmod nvme_tcp 00:18:16.147 rmmod nvme_fabrics 00:18:16.147 rmmod nvme_keyring 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1544741 ']' 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1544741 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1544741 ']' 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1544741 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.147 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1544741 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1544741' 00:18:16.407 killing process with pid 1544741 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1544741 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1544741 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.407 14:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:18.942 00:18:18.942 real 0m10.731s 00:18:18.942 user 0m5.006s 00:18:18.942 sys 0m5.879s 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:18.942 ************************************ 00:18:18.942 END TEST nvmf_fused_ordering 00:18:18.942 ************************************ 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:18.942 14:37:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.942 ************************************ 00:18:18.942 START TEST nvmf_ns_masking 00:18:18.942 ************************************ 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:18.942 * Looking for test storage... 00:18:18.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.942 14:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.942 --rc genhtml_branch_coverage=1 00:18:18.942 --rc genhtml_function_coverage=1 00:18:18.942 --rc genhtml_legend=1 00:18:18.942 --rc geninfo_all_blocks=1 00:18:18.942 --rc geninfo_unexecuted_blocks=1 00:18:18.942 00:18:18.942 ' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.942 --rc genhtml_branch_coverage=1 00:18:18.942 --rc genhtml_function_coverage=1 00:18:18.942 --rc genhtml_legend=1 00:18:18.942 --rc geninfo_all_blocks=1 00:18:18.942 --rc geninfo_unexecuted_blocks=1 00:18:18.942 00:18:18.942 ' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.942 --rc genhtml_branch_coverage=1 00:18:18.942 --rc genhtml_function_coverage=1 00:18:18.942 --rc genhtml_legend=1 00:18:18.942 --rc geninfo_all_blocks=1 00:18:18.942 --rc geninfo_unexecuted_blocks=1 00:18:18.942 00:18:18.942 ' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:18.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.942 --rc genhtml_branch_coverage=1 00:18:18.942 --rc 
genhtml_function_coverage=1 00:18:18.942 --rc genhtml_legend=1 00:18:18.942 --rc geninfo_all_blocks=1 00:18:18.942 --rc geninfo_unexecuted_blocks=1 00:18:18.942 00:18:18.942 ' 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:18.942 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3a06309d-d0a6-4c5f-b8de-cba36163d941 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=32583ff6-10e1-42f0-bd9d-5d07ce67d291 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=78e1b061-7fb5-49d7-b765-5d6fc07d8fe5 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:18.943 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:25.613 14:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.613 14:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:25.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.613 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:25.614 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:18:25.614 Found net devices under 0000:86:00.0: cvl_0_0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:25.614 Found net devices under 0000:86:00.1: cvl_0_1 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:25.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:18:25.614 00:18:25.614 --- 10.0.0.2 ping statistics --- 00:18:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.614 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:18:25.614 00:18:25.614 --- 10.0.0.1 ping statistics --- 00:18:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.614 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1548717 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1548717 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1548717 ']' 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:25.614 [2024-11-20 14:37:36.734010] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:25.614 [2024-11-20 14:37:36.734053] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.614 [2024-11-20 14:37:36.811856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.614 [2024-11-20 14:37:36.852683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.614 [2024-11-20 14:37:36.852717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:25.614 [2024-11-20 14:37:36.852725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.614 [2024-11-20 14:37:36.852731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.614 [2024-11-20 14:37:36.852737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.614 [2024-11-20 14:37:36.853319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:25.614 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.615 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.615 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:25.615 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.615 14:37:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:25.615 [2024-11-20 14:37:37.157346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.615 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:25.615 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:25.615 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:18:25.615 Malloc1 00:18:25.615 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:25.874 Malloc2 00:18:25.874 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:26.133 14:37:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:26.133 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.391 [2024-11-20 14:37:38.208983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.392 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:26.392 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 78e1b061-7fb5-49d7-b765-5d6fc07d8fe5 -a 10.0.0.2 -s 4420 -i 4 00:18:26.650 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:26.650 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:26.650 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.651 14:37:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:26.651 14:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.555 [ 0]:0x1 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.555 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:28.814 
14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cff14cfe95c459c9e36b6d396be56d0 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cff14cfe95c459c9e36b6d396be56d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:28.814 [ 0]:0x1 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:28.814 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:29.072 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cff14cfe95c459c9e36b6d396be56d0 00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cff14cfe95c459c9e36b6d396be56d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:29.073 [ 1]:0x2 00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:29.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:29.073 14:37:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:29.331 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 78e1b061-7fb5-49d7-b765-5d6fc07d8fe5 -a 10.0.0.2 -s 4420 -i 4
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:29.589 14:37:41
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:18:29.589 14:37:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:32.123 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 --
# ns_is_visible 0x2
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:32.124 [ 0]:0x2
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:32.124 [ 0]:0x1
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cff14cfe95c459c9e36b6d396be56d0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cff14cfe95c459c9e36b6d396be56d0 !=
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:32.124 [ 1]:0x2
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:32.124 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t
ns_is_visible
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:32.383 [ 0]:0x2
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- #
nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:18:32.383 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:32.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:32.642 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:32.642 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:18:32.642 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 78e1b061-7fb5-49d7-b765-5d6fc07d8fe5 -a 10.0.0.2 -s 4420 -i 4
00:18:32.901 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:32.901 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:18:32.901 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:32.901 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:18:32.901 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:18:32.901 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:18:35.437 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:35.438 [ 0]:0x1
00:18:35.438 14:37:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:35.438 14:37:46
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2cff14cfe95c459c9e36b6d396be56d0
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2cff14cfe95c459c9e36b6d396be56d0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:35.438 [ 1]:0x2
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:35.438
14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- #
ns_is_visible 0x2
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:35.438 [ 0]:0x2
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:35.438 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:35.697 14:37:47
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:35.697 [2024-11-20 14:37:47.615591] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:18:35.697 request:
00:18:35.697 {
00:18:35.697 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:35.697 "nsid": 2,
00:18:35.697 "host": "nqn.2016-06.io.spdk:host1",
00:18:35.697 "method": "nvmf_ns_remove_host",
00:18:35.697 "req_id": 1
00:18:35.697 }
00:18:35.697 Got JSON-RPC error response
00:18:35.697 response:
00:18:35.697 {
00:18:35.697 "code": -32602,
00:18:35.697 "message": "Invalid parameters"
00:18:35.697 }
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:35.697 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:35.956 14:37:47
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:35.956 [ 0]:0x2
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1ce106bbd58e41c99c75b2b2400245fa
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1ce106bbd58e41c99c75b2b2400245fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:35.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1550712
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1550712
/var/tmp/host.sock
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1550712 ']'
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:18:35.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:35.956 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:35.956 [2024-11-20 14:37:47.842556] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:18:35.956 [2024-11-20 14:37:47.842605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550712 ]
00:18:36.215 [2024-11-20 14:37:47.917437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:36.215 [2024-11-20 14:37:47.959814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:36.474 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:36.474 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:18:36.474 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:36.474 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:18:36.732 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3a06309d-d0a6-4c5f-b8de-cba36163d941
00:18:36.732 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:36.733 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3A06309DD0A64C5FB8DECBA36163D941 -i
00:18:36.992 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 32583ff6-10e1-42f0-bd9d-5d07ce67d291
00:18:36.992 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:36.992 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 32583FF610E142F0BD9D5D07CE67D291 -i
00:18:37.251 14:37:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:37.251 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:18:37.509 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:18:37.510 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:18:37.768 nvme0n1
00:18:37.768 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:18:37.768 14:37:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:18:38.336 nvme1n2
00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:38.336 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:38.595 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3a06309d-d0a6-4c5f-b8de-cba36163d941 == \3\a\0\6\3\0\9\d\-\d\0\a\6\-\4\c\5\f\-\b\8\d\e\-\c\b\a\3\6\1\6\3\d\9\4\1 ]] 00:18:38.595 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:38.595 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:38.595 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:38.854 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 32583ff6-10e1-42f0-bd9d-5d07ce67d291 == \3\2\5\8\3\f\f\6\-\1\0\e\1\-\4\2\f\0\-\b\d\9\d\-\5\d\0\7\c\e\6\7\d\2\9\1 ]] 00:18:38.854 14:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:39.114 14:37:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3a06309d-d0a6-4c5f-b8de-cba36163d941 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3A06309DD0A64C5FB8DECBA36163D941 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3A06309DD0A64C5FB8DECBA36163D941 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.114 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.115 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.115 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.115 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.115 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:39.115 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3A06309DD0A64C5FB8DECBA36163D941 00:18:39.375 [2024-11-20 14:37:51.221540] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:39.375 [2024-11-20 14:37:51.221575] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:39.375 [2024-11-20 14:37:51.221584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.375 request: 00:18:39.375 { 00:18:39.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.375 "namespace": { 00:18:39.375 "bdev_name": "invalid", 00:18:39.375 "nsid": 1, 00:18:39.375 "nguid": "3A06309DD0A64C5FB8DECBA36163D941", 00:18:39.375 "no_auto_visible": false, 00:18:39.375 "hide_metadata": false 00:18:39.375 }, 00:18:39.375 "method": "nvmf_subsystem_add_ns", 00:18:39.375 "req_id": 1 00:18:39.375 } 00:18:39.375 Got JSON-RPC error response 00:18:39.375 response: 00:18:39.375 { 00:18:39.375 "code": -32602, 00:18:39.375 "message": "Invalid parameters" 00:18:39.375 } 00:18:39.375 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:39.375 14:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.375 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.375 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.375 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3a06309d-d0a6-4c5f-b8de-cba36163d941 00:18:39.375 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:39.375 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3A06309DD0A64C5FB8DECBA36163D941 -i 00:18:39.636 14:37:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:41.541 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:41.541 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:41.541 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:41.800 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:41.800 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1550712 00:18:41.800 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1550712 ']' 00:18:41.800 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1550712 00:18:41.800 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:41.801 14:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.801 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1550712 00:18:41.801 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.801 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.801 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1550712' 00:18:41.801 killing process with pid 1550712 00:18:41.801 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1550712 00:18:41.801 14:37:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1550712 00:18:42.060 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.319 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:18:42.319 rmmod nvme_tcp 00:18:42.319 rmmod nvme_fabrics 00:18:42.319 rmmod nvme_keyring 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1548717 ']' 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1548717 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1548717 ']' 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1548717 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1548717 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1548717' 00:18:42.578 killing process with pid 1548717 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1548717 00:18:42.578 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1548717 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.838 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.743 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:44.743 00:18:44.743 real 0m26.182s 00:18:44.743 user 0m31.279s 00:18:44.743 sys 0m7.120s 00:18:44.743 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.743 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:44.743 ************************************ 00:18:44.743 END TEST nvmf_ns_masking 00:18:44.743 ************************************ 00:18:44.744 14:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:18:44.744 14:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:44.744 14:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.744 14:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.744 14:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.003 ************************************ 00:18:45.003 START TEST nvmf_nvme_cli 00:18:45.003 ************************************ 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:45.003 * Looking for test storage... 00:18:45.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:45.003 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.004 --rc genhtml_branch_coverage=1 00:18:45.004 --rc genhtml_function_coverage=1 00:18:45.004 --rc genhtml_legend=1 00:18:45.004 --rc geninfo_all_blocks=1 00:18:45.004 --rc geninfo_unexecuted_blocks=1 00:18:45.004 
00:18:45.004 ' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.004 --rc genhtml_branch_coverage=1 00:18:45.004 --rc genhtml_function_coverage=1 00:18:45.004 --rc genhtml_legend=1 00:18:45.004 --rc geninfo_all_blocks=1 00:18:45.004 --rc geninfo_unexecuted_blocks=1 00:18:45.004 00:18:45.004 ' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.004 --rc genhtml_branch_coverage=1 00:18:45.004 --rc genhtml_function_coverage=1 00:18:45.004 --rc genhtml_legend=1 00:18:45.004 --rc geninfo_all_blocks=1 00:18:45.004 --rc geninfo_unexecuted_blocks=1 00:18:45.004 00:18:45.004 ' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.004 --rc genhtml_branch_coverage=1 00:18:45.004 --rc genhtml_function_coverage=1 00:18:45.004 --rc genhtml_legend=1 00:18:45.004 --rc geninfo_all_blocks=1 00:18:45.004 --rc geninfo_unexecuted_blocks=1 00:18:45.004 00:18:45.004 ' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.004 14:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.004 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:51.573 14:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:51.573 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:51.573 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.573 14:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:51.573 Found net devices under 0000:86:00.0: cvl_0_0 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.573 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:51.574 Found net devices under 0000:86:00.1: cvl_0_1 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.574 14:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:51.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:18:51.574 00:18:51.574 --- 10.0.0.2 ping statistics --- 00:18:51.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.574 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:18:51.574 00:18:51.574 --- 10.0.0.1 ping statistics --- 00:18:51.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.574 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:51.574 14:38:02 
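The trace above (nvmf_tcp_init) moves one port of the two-port E810 NIC into a private network namespace so the SPDK target (10.0.0.2) and the kernel initiator (10.0.0.1) can talk over real hardware on a single host, then opens TCP port 4420 and pings both ways. A dry-run sketch of those steps, with commands echoed rather than executed since the real ones need root and the cvl_0_* interfaces:

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced above. Commands are
# echoed instead of executed: the real ones need root privileges and the
# cvl_0_* ports of the E810 NIC found earlier in the log.
nvmf_tcp_init_dryrun() {
  local ns=cvl_0_0_ns_spdk target_if=cvl_0_0 initiator_if=cvl_0_1
  local run="echo"

  $run ip netns add "$ns"                           # private namespace for the target
  $run ip link set "$target_if" netns "$ns"         # move one NIC port into it
  $run ip addr add 10.0.0.1/24 dev "$initiator_if"  # initiator side, root namespace
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  $run ip link set "$initiator_if" up
  $run ip netns exec "$ns" ip link set "$target_if" up
  $run ip netns exec "$ns" ip link set lo up
  # open the NVMe/TCP port, then verify reachability in both directions
  $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  $run ping -c 1 10.0.0.2
  $run ip netns exec "$ns" ping -c 1 10.0.0.1
}
nvmf_tcp_init_dryrun
```

Because the target's port lives in its own namespace, traffic between 10.0.0.1 and 10.0.0.2 actually crosses the wire between the two physical ports instead of being short-circuited through the loopback path.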
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1555306 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1555306 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1555306 ']' 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.574 14:38:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 [2024-11-20 14:38:02.890698] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:18:51.574 [2024-11-20 14:38:02.890752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.574 [2024-11-20 14:38:02.970536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.574 [2024-11-20 14:38:03.015687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.574 [2024-11-20 14:38:03.015725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.574 [2024-11-20 14:38:03.015732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.574 [2024-11-20 14:38:03.015739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.574 [2024-11-20 14:38:03.015744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:51.574 [2024-11-20 14:38:03.017217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.574 [2024-11-20 14:38:03.017330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.574 [2024-11-20 14:38:03.017355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.574 [2024-11-20 14:38:03.017356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 [2024-11-20 14:38:03.155771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 Malloc0 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 Malloc1 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.574 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.575 [2024-11-20 14:38:03.254704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:51.575 00:18:51.575 Discovery Log Number of Records 2, Generation counter 2 00:18:51.575 =====Discovery Log Entry 0====== 00:18:51.575 trtype: tcp 00:18:51.575 adrfam: ipv4 00:18:51.575 subtype: current discovery subsystem 00:18:51.575 treq: not required 00:18:51.575 portid: 0 00:18:51.575 trsvcid: 4420 
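The rpc_cmd calls traced above configure the target over /var/tmp/spdk.sock one step at a time. Gathered into one place (arguments are the ones visible in the log; the `./scripts/rpc.py` path is an assumption about the usual SPDK layout, and the calls are echoed via a wrapper since they need a running nvmf_tgt):

```shell
# The rpc_cmd sequence from the trace, collected as one script. rpc() is
# an echo wrapper (a live target is needed to run these for real); the
# ./scripts/rpc.py path is an assumption, the arguments match the log.
rpc() { echo ./scripts/rpc.py "$@"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as in the trace
rpc bdev_malloc_create 64 512 -b Malloc0        # two 64 MiB ramdisks, 512-byte blocks
rpc bdev_malloc_create 64 512 -b Malloc1
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc nvmf_subsystem_add_ns "$NQN" Malloc0        # surface as nvme0n1/nvme0n2 on the host
rpc nvmf_subsystem_add_ns "$NQN" Malloc1
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The explicit discovery listener is what makes the `nvme discover` step later in the trace return two records: the discovery subsystem itself plus cnode1.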
00:18:51.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.575 traddr: 10.0.0.2 00:18:51.575 eflags: explicit discovery connections, duplicate discovery information 00:18:51.575 sectype: none 00:18:51.575 =====Discovery Log Entry 1====== 00:18:51.575 trtype: tcp 00:18:51.575 adrfam: ipv4 00:18:51.575 subtype: nvme subsystem 00:18:51.575 treq: not required 00:18:51.575 portid: 0 00:18:51.575 trsvcid: 4420 00:18:51.575 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:51.575 traddr: 10.0.0.2 00:18:51.575 eflags: none 00:18:51.575 sectype: none 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:51.575 14:38:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:52.950 14:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:52.950 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:52.950 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.950 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:52.950 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:52.950 14:38:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:54.850 
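The waitforserial trace above polls `lsblk` until both namespaces with the expected serial appear. A sketch of that loop, with `lsblk` shadowed by a function so it runs without real block devices (the sample rows mirror the serial seen in the log):

```shell
# Sketch of the waitforserial polling loop traced above: retry until
# `lsblk` reports the expected number of namespaces for a serial.
# lsblk is shadowed by a function here, so no real devices are needed.
lsblk() { printf '%s\n' 'nvme0n1 SPDKISFASTANDAWESOME' 'nvme0n2 SPDKISFASTANDAWESOME'; }

waitforserial() {
  local serial=$1 want=${2:-1} i=0 have
  while (( i++ <= 15 )); do                               # ~30 s budget at 2 s per try
    have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")   # namespaces visible so far
    (( have == want )) && return 0
    sleep 2
  done
  return 1                                                # device(s) never showed up
}

waitforserial SPDKISFASTANDAWESOME 2 && echo "2 namespaces visible"
```

Polling with a bounded retry budget is what lets the test tolerate the delay between `nvme connect` returning and udev creating the block nodes, while still failing fast if the connect silently went wrong.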
14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:54.850 /dev/nvme0n2 ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
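The get_nvme_devs trace above reads `nvme list` line by line, skips the `Node` header and dashed separator, and echoes only `/dev/nvme*` entries. A sketch of that helper, with a canned sample standing in for real `nvme list` output (device names, serial, and model taken from the log):

```shell
# Sketch of the get_nvme_devs helper whose trace appears above: read
# `nvme list` output line by line and keep only device-node entries.
# The here-doc is a canned sample standing in for the real command.
get_nvme_devs() {
  local dev _
  while read -r dev _; do
    [[ $dev == /dev/nvme* ]] && echo "$dev"   # header/separator rows fail this test
  done <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1
EOF
}

devs=($(get_nvme_devs))
echo "nvme_num=${#devs[@]}"   # prints nvme_num=2, matching the trace
```

Comparing this count against the pre-connect count (nvme_num_before_connection=0 earlier in the trace) is how the test confirms that the connect step actually added devices before moving on to disconnect.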
return 0 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.850 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.109 rmmod nvme_tcp 00:18:55.109 rmmod nvme_fabrics 00:18:55.109 rmmod nvme_keyring 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1555306 ']' 
00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1555306 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1555306 ']' 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1555306 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1555306 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1555306' 00:18:55.109 killing process with pid 1555306 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1555306 00:18:55.109 14:38:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1555306 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.369 14:38:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.275 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.275 00:18:57.275 real 0m12.495s 00:18:57.275 user 0m17.962s 00:18:57.275 sys 0m5.163s 00:18:57.275 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.275 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:57.275 ************************************ 00:18:57.275 END TEST nvmf_nvme_cli 00:18:57.275 ************************************ 00:18:57.534 14:38:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.535 ************************************ 00:18:57.535 
START TEST nvmf_vfio_user 00:18:57.535 ************************************ 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:57.535 * Looking for test storage... 00:18:57.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.535 14:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:57.535 14:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:57.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.535 --rc genhtml_branch_coverage=1 00:18:57.535 --rc genhtml_function_coverage=1 00:18:57.535 --rc genhtml_legend=1 00:18:57.535 --rc geninfo_all_blocks=1 00:18:57.535 --rc geninfo_unexecuted_blocks=1 00:18:57.535 00:18:57.535 ' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:57.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.535 --rc genhtml_branch_coverage=1 00:18:57.535 --rc genhtml_function_coverage=1 00:18:57.535 --rc genhtml_legend=1 00:18:57.535 --rc geninfo_all_blocks=1 00:18:57.535 --rc geninfo_unexecuted_blocks=1 00:18:57.535 00:18:57.535 ' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:57.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.535 --rc genhtml_branch_coverage=1 00:18:57.535 --rc genhtml_function_coverage=1 00:18:57.535 --rc genhtml_legend=1 00:18:57.535 --rc geninfo_all_blocks=1 00:18:57.535 --rc geninfo_unexecuted_blocks=1 00:18:57.535 00:18:57.535 ' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:57.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.535 --rc genhtml_branch_coverage=1 00:18:57.535 --rc genhtml_function_coverage=1 00:18:57.535 --rc genhtml_legend=1 00:18:57.535 --rc geninfo_all_blocks=1 00:18:57.535 --rc geninfo_unexecuted_blocks=1 00:18:57.535 00:18:57.535 ' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.535 
14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.535 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:57.536 14:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:57.536 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1556504 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1556504' 00:18:57.795 Process pid: 1556504 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1556504 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1556504 ']' 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.795 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:57.795 [2024-11-20 14:38:09.543141] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:18:57.795 [2024-11-20 14:38:09.543190] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.795 [2024-11-20 14:38:09.616775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.795 [2024-11-20 14:38:09.659574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.795 [2024-11-20 14:38:09.659612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.795 [2024-11-20 14:38:09.659619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.795 [2024-11-20 14:38:09.659625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.795 [2024-11-20 14:38:09.659631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.795 [2024-11-20 14:38:09.661053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.795 [2024-11-20 14:38:09.661163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.795 [2024-11-20 14:38:09.661269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.795 [2024-11-20 14:38:09.661270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.052 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.052 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:58.052 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:58.985 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:59.243 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:59.243 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:59.243 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:59.243 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:59.243 14:38:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:59.243 Malloc1 00:18:59.501 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:59.501 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:59.758 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:00.016 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:00.016 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:00.016 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:00.273 Malloc2 00:19:00.273 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:00.273 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:00.531 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:00.791 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:00.791 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:00.791 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:19:00.791 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:00.791 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:00.791 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:00.791 [2024-11-20 14:38:12.638907] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:00.791 [2024-11-20 14:38:12.638939] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557024 ] 00:19:00.791 [2024-11-20 14:38:12.680800] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:00.791 [2024-11-20 14:38:12.685180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:00.791 [2024-11-20 14:38:12.685204] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8c7028d000 00:19:00.791 [2024-11-20 14:38:12.686177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.687180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.688180] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.689183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.690181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.691194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.692203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:00.791 [2024-11-20 14:38:12.693214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:00.792 [2024-11-20 14:38:12.694218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:00.792 [2024-11-20 14:38:12.694228] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8c70282000 00:19:00.792 [2024-11-20 14:38:12.695171] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:00.792 [2024-11-20 14:38:12.704782] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:00.792 [2024-11-20 14:38:12.704808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:00.792 [2024-11-20 14:38:12.709303] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:19:00.792 [2024-11-20 14:38:12.709340] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:00.792 [2024-11-20 14:38:12.709407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:00.792 [2024-11-20 14:38:12.709421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:00.792 [2024-11-20 14:38:12.709426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:00.792 [2024-11-20 14:38:12.710299] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:00.792 [2024-11-20 14:38:12.710309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:00.792 [2024-11-20 14:38:12.710316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:00.792 [2024-11-20 14:38:12.711304] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:00.792 [2024-11-20 14:38:12.711312] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:00.792 [2024-11-20 14:38:12.711319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:00.792 [2024-11-20 14:38:12.712306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:00.792 [2024-11-20 14:38:12.712315] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:00.792 [2024-11-20 14:38:12.713311] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:00.792 [2024-11-20 14:38:12.713320] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:00.792 [2024-11-20 14:38:12.713325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:00.792 [2024-11-20 14:38:12.713331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:00.792 [2024-11-20 14:38:12.713439] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:00.792 [2024-11-20 14:38:12.713444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:00.792 [2024-11-20 14:38:12.713449] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:00.792 [2024-11-20 14:38:12.714315] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:00.792 [2024-11-20 14:38:12.715319] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:00.792 [2024-11-20 14:38:12.716324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:19:00.792 [2024-11-20 14:38:12.717324] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:00.792 [2024-11-20 14:38:12.717391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:00.792 [2024-11-20 14:38:12.718334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:00.792 [2024-11-20 14:38:12.718343] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:00.792 [2024-11-20 14:38:12.718348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:00.792 [2024-11-20 14:38:12.718375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718388] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:00.792 [2024-11-20 14:38:12.718394] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:00.792 [2024-11-20 14:38:12.718397] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.792 [2024-11-20 14:38:12.718410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:00.792 [2024-11-20 14:38:12.718453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:00.792 [2024-11-20 14:38:12.718462] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:00.792 [2024-11-20 14:38:12.718466] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:00.792 [2024-11-20 14:38:12.718470] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:00.792 [2024-11-20 14:38:12.718475] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:00.792 [2024-11-20 14:38:12.718483] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:00.792 [2024-11-20 14:38:12.718487] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:00.792 [2024-11-20 14:38:12.718492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:00.792 [2024-11-20 14:38:12.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:00.792 [2024-11-20 14:38:12.718532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.792 [2024-11-20 
14:38:12.718539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.792 [2024-11-20 14:38:12.718547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.792 [2024-11-20 14:38:12.718556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.792 [2024-11-20 14:38:12.718561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:00.792 [2024-11-20 14:38:12.718585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:00.792 [2024-11-20 14:38:12.718592] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:19:00.792 [2024-11-20 14:38:12.718597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:00.792 [2024-11-20 14:38:12.718632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:00.792 [2024-11-20 14:38:12.718684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:00.792 [2024-11-20 14:38:12.718698] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:00.792 [2024-11-20 14:38:12.718702] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:00.793 [2024-11-20 14:38:12.718705] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.793 [2024-11-20 14:38:12.718711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718729] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:00.793 [2024-11-20 14:38:12.718736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718750] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:00.793 [2024-11-20 14:38:12.718754] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:00.793 [2024-11-20 14:38:12.718757] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.793 [2024-11-20 14:38:12.718763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718808] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:00.793 [2024-11-20 14:38:12.718812] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:00.793 [2024-11-20 14:38:12.718815] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.793 [2024-11-20 14:38:12.718820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718871] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:00.793 [2024-11-20 14:38:12.718875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:00.793 [2024-11-20 14:38:12.718880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:00.793 [2024-11-20 14:38:12.718896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.718974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.718985] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:00.793 [2024-11-20 14:38:12.718992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:00.793 [2024-11-20 14:38:12.718995] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:00.793 [2024-11-20 14:38:12.718998] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:00.793 [2024-11-20 14:38:12.719001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:00.793 [2024-11-20 14:38:12.719006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:19:00.793 [2024-11-20 14:38:12.719013] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:00.793 [2024-11-20 14:38:12.719017] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:00.793 [2024-11-20 14:38:12.719020] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.793 [2024-11-20 14:38:12.719026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.719033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:00.793 [2024-11-20 14:38:12.719037] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:00.793 [2024-11-20 14:38:12.719040] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.793 [2024-11-20 14:38:12.719045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.719052] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:00.793 [2024-11-20 14:38:12.719056] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:00.793 [2024-11-20 14:38:12.719059] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:00.793 [2024-11-20 14:38:12.719064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:00.793 [2024-11-20 14:38:12.719071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.719081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.719092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:00.793 [2024-11-20 14:38:12.719099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:00.793 ===================================================== 00:19:00.793 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:00.793 ===================================================== 00:19:00.793 Controller Capabilities/Features 00:19:00.793 ================================ 00:19:00.793 Vendor ID: 4e58 00:19:00.793 Subsystem Vendor ID: 4e58 00:19:00.793 Serial Number: SPDK1 00:19:00.793 Model Number: SPDK bdev Controller 00:19:00.793 Firmware Version: 25.01 00:19:00.793 Recommended Arb Burst: 6 00:19:00.793 IEEE OUI Identifier: 8d 6b 50 00:19:00.793 Multi-path I/O 00:19:00.793 May have multiple subsystem ports: Yes 00:19:00.793 May have multiple controllers: Yes 00:19:00.793 Associated with SR-IOV VF: No 00:19:00.793 Max Data Transfer Size: 131072 00:19:00.793 Max Number of Namespaces: 32 00:19:00.793 Max Number of I/O Queues: 127 00:19:00.793 NVMe Specification Version (VS): 1.3 00:19:00.793 NVMe Specification Version (Identify): 1.3 00:19:00.793 Maximum Queue Entries: 256 00:19:00.793 Contiguous Queues Required: Yes 00:19:00.793 Arbitration Mechanisms Supported 00:19:00.793 Weighted Round Robin: Not Supported 00:19:00.793 Vendor Specific: Not Supported 00:19:00.793 Reset Timeout: 15000 ms 00:19:00.793 Doorbell Stride: 4 bytes 00:19:00.793 NVM Subsystem Reset: Not Supported 00:19:00.793 Command Sets Supported 00:19:00.793 NVM Command Set: Supported 00:19:00.793 Boot Partition: Not Supported 00:19:00.793 Memory 
Page Size Minimum: 4096 bytes 00:19:00.793 Memory Page Size Maximum: 4096 bytes 00:19:00.793 Persistent Memory Region: Not Supported 00:19:00.793 Optional Asynchronous Events Supported 00:19:00.793 Namespace Attribute Notices: Supported 00:19:00.793 Firmware Activation Notices: Not Supported 00:19:00.793 ANA Change Notices: Not Supported 00:19:00.793 PLE Aggregate Log Change Notices: Not Supported 00:19:00.793 LBA Status Info Alert Notices: Not Supported 00:19:00.793 EGE Aggregate Log Change Notices: Not Supported 00:19:00.793 Normal NVM Subsystem Shutdown event: Not Supported 00:19:00.793 Zone Descriptor Change Notices: Not Supported 00:19:00.793 Discovery Log Change Notices: Not Supported 00:19:00.793 Controller Attributes 00:19:00.794 128-bit Host Identifier: Supported 00:19:00.794 Non-Operational Permissive Mode: Not Supported 00:19:00.794 NVM Sets: Not Supported 00:19:00.794 Read Recovery Levels: Not Supported 00:19:00.794 Endurance Groups: Not Supported 00:19:00.794 Predictable Latency Mode: Not Supported 00:19:00.794 Traffic Based Keep ALive: Not Supported 00:19:00.794 Namespace Granularity: Not Supported 00:19:00.794 SQ Associations: Not Supported 00:19:00.794 UUID List: Not Supported 00:19:00.794 Multi-Domain Subsystem: Not Supported 00:19:00.794 Fixed Capacity Management: Not Supported 00:19:00.794 Variable Capacity Management: Not Supported 00:19:00.794 Delete Endurance Group: Not Supported 00:19:00.794 Delete NVM Set: Not Supported 00:19:00.794 Extended LBA Formats Supported: Not Supported 00:19:00.794 Flexible Data Placement Supported: Not Supported 00:19:00.794 00:19:00.794 Controller Memory Buffer Support 00:19:00.794 ================================ 00:19:00.794 Supported: No 00:19:00.794 00:19:00.794 Persistent Memory Region Support 00:19:00.794 ================================ 00:19:00.794 Supported: No 00:19:00.794 00:19:00.794 Admin Command Set Attributes 00:19:00.794 ============================ 00:19:00.794 Security Send/Receive: Not Supported 
00:19:00.794 Format NVM: Not Supported 00:19:00.794 Firmware Activate/Download: Not Supported 00:19:00.794 Namespace Management: Not Supported 00:19:00.794 Device Self-Test: Not Supported 00:19:00.794 Directives: Not Supported 00:19:00.794 NVMe-MI: Not Supported 00:19:00.794 Virtualization Management: Not Supported 00:19:00.794 Doorbell Buffer Config: Not Supported 00:19:00.794 Get LBA Status Capability: Not Supported 00:19:00.794 Command & Feature Lockdown Capability: Not Supported 00:19:00.794 Abort Command Limit: 4 00:19:00.794 Async Event Request Limit: 4 00:19:00.794 Number of Firmware Slots: N/A 00:19:00.794 Firmware Slot 1 Read-Only: N/A 00:19:00.794 Firmware Activation Without Reset: N/A 00:19:00.794 Multiple Update Detection Support: N/A 00:19:00.794 Firmware Update Granularity: No Information Provided 00:19:00.794 Per-Namespace SMART Log: No 00:19:00.794 Asymmetric Namespace Access Log Page: Not Supported 00:19:00.794 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:00.794 Command Effects Log Page: Supported 00:19:00.794 Get Log Page Extended Data: Supported 00:19:00.794 Telemetry Log Pages: Not Supported 00:19:00.794 Persistent Event Log Pages: Not Supported 00:19:00.794 Supported Log Pages Log Page: May Support 00:19:00.794 Commands Supported & Effects Log Page: Not Supported 00:19:00.794 Feature Identifiers & Effects Log Page:May Support 00:19:00.794 NVMe-MI Commands & Effects Log Page: May Support 00:19:00.794 Data Area 4 for Telemetry Log: Not Supported 00:19:00.794 Error Log Page Entries Supported: 128 00:19:00.794 Keep Alive: Supported 00:19:00.794 Keep Alive Granularity: 10000 ms 00:19:00.794 00:19:00.794 NVM Command Set Attributes 00:19:00.794 ========================== 00:19:00.794 Submission Queue Entry Size 00:19:00.794 Max: 64 00:19:00.794 Min: 64 00:19:00.794 Completion Queue Entry Size 00:19:00.794 Max: 16 00:19:00.794 Min: 16 00:19:00.794 Number of Namespaces: 32 00:19:00.794 Compare Command: Supported 00:19:00.794 Write Uncorrectable 
Command: Not Supported 00:19:00.794 Dataset Management Command: Supported 00:19:00.794 Write Zeroes Command: Supported 00:19:00.794 Set Features Save Field: Not Supported 00:19:00.794 Reservations: Not Supported 00:19:00.794 Timestamp: Not Supported 00:19:00.794 Copy: Supported 00:19:00.794 Volatile Write Cache: Present 00:19:00.794 Atomic Write Unit (Normal): 1 00:19:00.794 Atomic Write Unit (PFail): 1 00:19:00.794 Atomic Compare & Write Unit: 1 00:19:00.794 Fused Compare & Write: Supported 00:19:00.794 Scatter-Gather List 00:19:00.794 SGL Command Set: Supported (Dword aligned) 00:19:00.794 SGL Keyed: Not Supported 00:19:00.794 SGL Bit Bucket Descriptor: Not Supported 00:19:00.794 SGL Metadata Pointer: Not Supported 00:19:00.794 Oversized SGL: Not Supported 00:19:00.794 SGL Metadata Address: Not Supported 00:19:00.794 SGL Offset: Not Supported 00:19:00.794 Transport SGL Data Block: Not Supported 00:19:00.794 Replay Protected Memory Block: Not Supported 00:19:00.794 00:19:00.794 Firmware Slot Information 00:19:00.794 ========================= 00:19:00.794 Active slot: 1 00:19:00.794 Slot 1 Firmware Revision: 25.01 00:19:00.794 00:19:00.794 00:19:00.794 Commands Supported and Effects 00:19:00.794 ============================== 00:19:00.794 Admin Commands 00:19:00.794 -------------- 00:19:00.794 Get Log Page (02h): Supported 00:19:00.794 Identify (06h): Supported 00:19:00.794 Abort (08h): Supported 00:19:00.794 Set Features (09h): Supported 00:19:00.794 Get Features (0Ah): Supported 00:19:00.794 Asynchronous Event Request (0Ch): Supported 00:19:00.794 Keep Alive (18h): Supported 00:19:00.794 I/O Commands 00:19:00.794 ------------ 00:19:00.794 Flush (00h): Supported LBA-Change 00:19:00.794 Write (01h): Supported LBA-Change 00:19:00.794 Read (02h): Supported 00:19:00.794 Compare (05h): Supported 00:19:00.794 Write Zeroes (08h): Supported LBA-Change 00:19:00.794 Dataset Management (09h): Supported LBA-Change 00:19:00.794 Copy (19h): Supported LBA-Change 00:19:00.794 
00:19:00.794 Error Log 00:19:00.794 ========= 00:19:00.794 00:19:00.794 Arbitration 00:19:00.794 =========== 00:19:00.794 Arbitration Burst: 1 00:19:00.794 00:19:00.794 Power Management 00:19:00.794 ================ 00:19:00.794 Number of Power States: 1 00:19:00.794 Current Power State: Power State #0 00:19:00.794 Power State #0: 00:19:00.794 Max Power: 0.00 W 00:19:00.794 Non-Operational State: Operational 00:19:00.794 Entry Latency: Not Reported 00:19:00.794 Exit Latency: Not Reported 00:19:00.794 Relative Read Throughput: 0 00:19:00.794 Relative Read Latency: 0 00:19:00.794 Relative Write Throughput: 0 00:19:00.794 Relative Write Latency: 0 00:19:00.794 Idle Power: Not Reported 00:19:00.794 Active Power: Not Reported 00:19:00.794 Non-Operational Permissive Mode: Not Supported 00:19:00.794 00:19:00.794 Health Information 00:19:00.794 ================== 00:19:00.794 Critical Warnings: 00:19:00.794 Available Spare Space: OK 00:19:00.794 Temperature: OK 00:19:00.794 Device Reliability: OK 00:19:00.794 Read Only: No 00:19:00.794 Volatile Memory Backup: OK 00:19:00.794 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:00.794 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:00.794 Available Spare: 0% 00:19:00.794 Available Sp[2024-11-20 14:38:12.719193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:00.794 [2024-11-20 14:38:12.719205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:00.794 [2024-11-20 14:38:12.719231] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:00.794 [2024-11-20 14:38:12.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.794 [2024-11-20 14:38:12.719246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.794 [2024-11-20 14:38:12.719251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.795 [2024-11-20 14:38:12.719257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.795 [2024-11-20 14:38:12.722957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:00.795 [2024-11-20 14:38:12.722970] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:00.795 [2024-11-20 14:38:12.723360] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:00.795 [2024-11-20 14:38:12.723408] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:00.795 [2024-11-20 14:38:12.723414] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:00.795 [2024-11-20 14:38:12.724365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:00.795 [2024-11-20 14:38:12.724376] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:00.795 [2024-11-20 14:38:12.724424] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:00.795 [2024-11-20 14:38:12.726408] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:01.052 are Threshold: 0% 00:19:01.052 Life Percentage Used: 0% 
00:19:01.052 Data Units Read: 0 00:19:01.052 Data Units Written: 0 00:19:01.052 Host Read Commands: 0 00:19:01.052 Host Write Commands: 0 00:19:01.052 Controller Busy Time: 0 minutes 00:19:01.052 Power Cycles: 0 00:19:01.052 Power On Hours: 0 hours 00:19:01.052 Unsafe Shutdowns: 0 00:19:01.052 Unrecoverable Media Errors: 0 00:19:01.052 Lifetime Error Log Entries: 0 00:19:01.052 Warning Temperature Time: 0 minutes 00:19:01.052 Critical Temperature Time: 0 minutes 00:19:01.052 00:19:01.052 Number of Queues 00:19:01.052 ================ 00:19:01.052 Number of I/O Submission Queues: 127 00:19:01.052 Number of I/O Completion Queues: 127 00:19:01.052 00:19:01.052 Active Namespaces 00:19:01.052 ================= 00:19:01.052 Namespace ID:1 00:19:01.052 Error Recovery Timeout: Unlimited 00:19:01.052 Command Set Identifier: NVM (00h) 00:19:01.052 Deallocate: Supported 00:19:01.052 Deallocated/Unwritten Error: Not Supported 00:19:01.052 Deallocated Read Value: Unknown 00:19:01.052 Deallocate in Write Zeroes: Not Supported 00:19:01.052 Deallocated Guard Field: 0xFFFF 00:19:01.052 Flush: Supported 00:19:01.052 Reservation: Supported 00:19:01.052 Namespace Sharing Capabilities: Multiple Controllers 00:19:01.052 Size (in LBAs): 131072 (0GiB) 00:19:01.052 Capacity (in LBAs): 131072 (0GiB) 00:19:01.052 Utilization (in LBAs): 131072 (0GiB) 00:19:01.052 NGUID: 64CF97E8442B4E48BBD2F9343D27D738 00:19:01.052 UUID: 64cf97e8-442b-4e48-bbd2-f9343d27d738 00:19:01.052 Thin Provisioning: Not Supported 00:19:01.052 Per-NS Atomic Units: Yes 00:19:01.052 Atomic Boundary Size (Normal): 0 00:19:01.052 Atomic Boundary Size (PFail): 0 00:19:01.052 Atomic Boundary Offset: 0 00:19:01.052 Maximum Single Source Range Length: 65535 00:19:01.052 Maximum Copy Length: 65535 00:19:01.052 Maximum Source Range Count: 1 00:19:01.052 NGUID/EUI64 Never Reused: No 00:19:01.052 Namespace Write Protected: No 00:19:01.052 Number of LBA Formats: 1 00:19:01.052 Current LBA Format: LBA Format #00 00:19:01.052 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:19:01.052 00:19:01.052 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:01.052 [2024-11-20 14:38:12.963837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:06.316 Initializing NVMe Controllers 00:19:06.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:06.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:06.316 Initialization complete. Launching workers. 00:19:06.316 ======================================================== 00:19:06.316 Latency(us) 00:19:06.317 Device Information : IOPS MiB/s Average min max 00:19:06.317 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39959.65 156.09 3203.07 997.52 8165.44 00:19:06.317 ======================================================== 00:19:06.317 Total : 39959.65 156.09 3203.07 997.52 8165.44 00:19:06.317 00:19:06.317 [2024-11-20 14:38:17.981025] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:06.317 14:38:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:06.317 [2024-11-20 14:38:18.217108] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:11.671 Initializing NVMe Controllers 00:19:11.671 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:11.671 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:11.671 Initialization complete. Launching workers. 00:19:11.671 ======================================================== 00:19:11.671 Latency(us) 00:19:11.671 Device Information : IOPS MiB/s Average min max 00:19:11.671 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.63 62.74 7975.12 5984.25 8980.45 00:19:11.671 ======================================================== 00:19:11.671 Total : 16060.63 62.74 7975.12 5984.25 8980.45 00:19:11.671 00:19:11.671 [2024-11-20 14:38:23.258782] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:11.671 14:38:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:11.671 [2024-11-20 14:38:23.470783] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:16.930 [2024-11-20 14:38:28.547311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:16.930 Initializing NVMe Controllers 00:19:16.930 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:16.930 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:16.930 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:16.930 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:16.930 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:16.930 Initialization complete. 
Launching workers. 00:19:16.930 Starting thread on core 2 00:19:16.930 Starting thread on core 3 00:19:16.930 Starting thread on core 1 00:19:16.930 14:38:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:16.930 [2024-11-20 14:38:28.840314] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:20.213 [2024-11-20 14:38:31.905540] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:20.213 Initializing NVMe Controllers 00:19:20.213 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:20.213 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:20.213 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:20.213 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:20.213 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:20.213 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:20.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:20.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:20.213 Initialization complete. Launching workers. 
00:19:20.213 Starting thread on core 1 with urgent priority queue 00:19:20.213 Starting thread on core 2 with urgent priority queue 00:19:20.213 Starting thread on core 3 with urgent priority queue 00:19:20.213 Starting thread on core 0 with urgent priority queue 00:19:20.213 SPDK bdev Controller (SPDK1 ) core 0: 1920.67 IO/s 52.07 secs/100000 ios 00:19:20.213 SPDK bdev Controller (SPDK1 ) core 1: 1967.67 IO/s 50.82 secs/100000 ios 00:19:20.213 SPDK bdev Controller (SPDK1 ) core 2: 2178.33 IO/s 45.91 secs/100000 ios 00:19:20.213 SPDK bdev Controller (SPDK1 ) core 3: 2479.67 IO/s 40.33 secs/100000 ios 00:19:20.213 ======================================================== 00:19:20.213 00:19:20.213 14:38:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:20.470 [2024-11-20 14:38:32.205564] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:20.470 Initializing NVMe Controllers 00:19:20.470 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:20.470 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:20.470 Namespace ID: 1 size: 0GB 00:19:20.470 Initialization complete. 00:19:20.470 INFO: using host memory buffer for IO 00:19:20.470 Hello world! 
00:19:20.470 [2024-11-20 14:38:32.239812] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:20.470 14:38:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:20.728 [2024-11-20 14:38:32.526361] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:21.660 Initializing NVMe Controllers 00:19:21.661 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:21.661 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:21.661 Initialization complete. Launching workers. 00:19:21.661 submit (in ns) avg, min, max = 7468.1, 3277.4, 3999727.0 00:19:21.661 complete (in ns) avg, min, max = 21200.5, 1779.1, 5991188.7 00:19:21.661 00:19:21.661 Submit histogram 00:19:21.661 ================ 00:19:21.661 Range in us Cumulative Count 00:19:21.661 3.270 - 3.283: 0.0124% ( 2) 00:19:21.661 3.297 - 3.311: 0.0372% ( 4) 00:19:21.661 3.311 - 3.325: 0.1239% ( 14) 00:19:21.661 3.325 - 3.339: 0.2788% ( 25) 00:19:21.661 3.339 - 3.353: 0.9789% ( 113) 00:19:21.661 3.353 - 3.367: 4.2315% ( 525) 00:19:21.661 3.367 - 3.381: 9.3055% ( 819) 00:19:21.661 3.381 - 3.395: 15.3646% ( 978) 00:19:21.661 3.395 - 3.409: 21.7149% ( 1025) 00:19:21.661 3.409 - 3.423: 28.5918% ( 1110) 00:19:21.661 3.423 - 3.437: 33.9446% ( 864) 00:19:21.661 3.437 - 3.450: 39.4957% ( 896) 00:19:21.661 3.450 - 3.464: 45.1521% ( 913) 00:19:21.661 3.464 - 3.478: 49.5261% ( 706) 00:19:21.661 3.478 - 3.492: 53.4787% ( 638) 00:19:21.661 3.492 - 3.506: 58.8068% ( 860) 00:19:21.661 3.506 - 3.520: 65.8385% ( 1135) 00:19:21.661 3.520 - 3.534: 70.2868% ( 718) 00:19:21.661 3.534 - 3.548: 74.6732% ( 708) 00:19:21.661 3.548 - 3.562: 79.8216% ( 831) 00:19:21.661 3.562 - 3.590: 85.7754% ( 961) 
00:19:21.661 3.590 - 3.617: 87.6711% ( 306) 00:19:21.661 3.617 - 3.645: 88.1606% ( 79) 00:19:21.661 3.645 - 3.673: 89.2138% ( 170) 00:19:21.661 3.673 - 3.701: 91.0043% ( 289) 00:19:21.661 3.701 - 3.729: 92.8133% ( 292) 00:19:21.661 3.729 - 3.757: 94.3250% ( 244) 00:19:21.661 3.757 - 3.784: 95.8615% ( 248) 00:19:21.661 3.784 - 3.812: 97.2245% ( 220) 00:19:21.661 3.812 - 3.840: 98.3025% ( 174) 00:19:21.661 3.840 - 3.868: 98.8972% ( 96) 00:19:21.661 3.868 - 3.896: 99.2813% ( 62) 00:19:21.661 3.896 - 3.923: 99.4920% ( 34) 00:19:21.661 3.923 - 3.951: 99.5353% ( 7) 00:19:21.661 3.951 - 3.979: 99.5601% ( 4) 00:19:21.661 3.979 - 4.007: 99.5663% ( 1) 00:19:21.661 4.007 - 4.035: 99.5725% ( 1) 00:19:21.661 4.035 - 4.063: 99.5787% ( 1) 00:19:21.661 4.063 - 4.090: 99.5973% ( 3) 00:19:21.661 4.090 - 4.118: 99.6035% ( 1) 00:19:21.661 4.146 - 4.174: 99.6097% ( 1) 00:19:21.661 4.174 - 4.202: 99.6159% ( 1) 00:19:21.661 4.230 - 4.257: 99.6221% ( 1) 00:19:21.661 5.064 - 5.092: 99.6283% ( 1) 00:19:21.661 5.148 - 5.176: 99.6345% ( 1) 00:19:21.661 5.231 - 5.259: 99.6407% ( 1) 00:19:21.661 5.315 - 5.343: 99.6531% ( 2) 00:19:21.661 5.343 - 5.370: 99.6593% ( 1) 00:19:21.661 5.398 - 5.426: 99.6654% ( 1) 00:19:21.661 5.426 - 5.454: 99.6716% ( 1) 00:19:21.661 5.454 - 5.482: 99.6778% ( 1) 00:19:21.661 5.510 - 5.537: 99.6840% ( 1) 00:19:21.661 5.537 - 5.565: 99.6964% ( 2) 00:19:21.661 5.565 - 5.593: 99.7026% ( 1) 00:19:21.661 5.593 - 5.621: 99.7150% ( 2) 00:19:21.661 5.649 - 5.677: 99.7336% ( 3) 00:19:21.661 5.704 - 5.732: 99.7398% ( 1) 00:19:21.661 5.760 - 5.788: 99.7522% ( 2) 00:19:21.661 5.788 - 5.816: 99.7584% ( 1) 00:19:21.661 5.843 - 5.871: 99.7708% ( 2) 00:19:21.661 5.871 - 5.899: 99.7770% ( 1) 00:19:21.661 5.899 - 5.927: 99.7956% ( 3) 00:19:21.661 5.955 - 5.983: 99.8017% ( 1) 00:19:21.661 5.983 - 6.010: 99.8079% ( 1) 00:19:21.661 6.066 - 6.094: 99.8141% ( 1) 00:19:21.661 6.150 - 6.177: 99.8203% ( 1) 00:19:21.661 6.177 - 6.205: 99.8265% ( 1) 00:19:21.661 6.289 - 6.317: 99.8327% ( 1) 
00:19:21.661 6.372 - 6.400: 99.8389% ( 1) 00:19:21.661 6.428 - 6.456: 99.8451% ( 1) 00:19:21.661 6.511 - 6.539: 99.8575% ( 2) 00:19:21.661 6.623 - 6.650: 99.8637% ( 1) 00:19:21.661 6.650 - 6.678: 99.8699% ( 1) 00:19:21.661 7.123 - 7.179: 99.8761% ( 1) 00:19:21.661 7.179 - 7.235: 99.8885% ( 2) 00:19:21.661 8.960 - 9.016: 99.8947% ( 1) 00:19:21.661 41.183 - 41.405: 99.9009% ( 1) 00:19:21.661 3989.148 - 4017.642: 100.0000% ( 16) 00:19:21.661 00:19:21.661 Complete histogram 00:19:21.661 ================== 00:19:21.661 [2024-11-20 14:38:33.548231] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:21.661 Range in us Cumulative Count 00:19:21.661 1.774 - 1.781: 0.0062% ( 1) 00:19:21.661 1.809 - 1.823: 0.0805% ( 12) 00:19:21.661 1.823 - 1.837: 0.8488% ( 124) 00:19:21.661 1.837 - 1.850: 2.2799% ( 231) 00:19:21.661 1.850 - 1.864: 3.2092% ( 150) 00:19:21.661 1.864 - 1.878: 23.5735% ( 3287) 00:19:21.661 1.878 - 1.892: 76.7363% ( 8581) 00:19:21.661 1.892 - 1.906: 89.1457% ( 2003) 00:19:21.661 1.906 - 1.920: 93.7550% ( 744) 00:19:21.661 1.920 - 1.934: 94.8083% ( 170) 00:19:21.661 1.934 - 1.948: 95.7685% ( 155) 00:19:21.661 1.948 - 1.962: 97.7387% ( 318) 00:19:21.661 1.962 - 1.976: 98.7919% ( 170) 00:19:21.661 1.976 - 1.990: 99.1017% ( 50) 00:19:21.661 1.990 - 2.003: 99.1822% ( 13) 00:19:21.661 2.003 - 2.017: 99.2070% ( 4) 00:19:21.661 2.017 - 2.031: 99.2194% ( 2) 00:19:21.661 2.031 - 2.045: 99.2256% ( 1) 00:19:21.661 2.059 - 2.073: 99.2318% ( 1) 00:19:21.661 2.073 - 2.087: 99.2380% ( 1) 00:19:21.661 2.087 - 2.101: 99.2442% ( 1) 00:19:21.661 2.101 - 2.115: 99.2627% ( 3) 00:19:21.661 2.115 - 2.129: 99.2689% ( 1) 00:19:21.661 2.129 - 2.143: 99.2751% ( 1) 00:19:21.661 2.143 - 2.157: 99.2875% ( 2) 00:19:21.661 2.157 - 2.170: 99.2937% ( 1) 00:19:21.661 2.170 - 2.184: 99.3061% ( 2) 00:19:21.661 2.198 - 2.212: 99.3123% ( 1) 00:19:21.661 2.212 - 2.226: 99.3185% ( 1) 00:19:21.661 2.240 - 2.254: 99.3309% ( 2) 00:19:21.661 2.254 - 
2.268: 99.3371% ( 1) 00:19:21.661 2.282 - 2.296: 99.3433% ( 1) 00:19:21.661 2.379 - 2.393: 99.3495% ( 1) 00:19:21.661 3.729 - 3.757: 99.3557% ( 1) 00:19:21.661 3.979 - 4.007: 99.3619% ( 1) 00:19:21.661 4.007 - 4.035: 99.3681% ( 1) 00:19:21.661 4.035 - 4.063: 99.3743% ( 1) 00:19:21.661 4.090 - 4.118: 99.3929% ( 3) 00:19:21.661 4.257 - 4.285: 99.4052% ( 2) 00:19:21.661 4.397 - 4.424: 99.4114% ( 1) 00:19:21.661 4.480 - 4.508: 99.4176% ( 1) 00:19:21.662 4.508 - 4.536: 99.4238% ( 1) 00:19:21.662 4.647 - 4.675: 99.4300% ( 1) 00:19:21.662 4.675 - 4.703: 99.4424% ( 2) 00:19:21.662 4.758 - 4.786: 99.4486% ( 1) 00:19:21.662 4.897 - 4.925: 99.4548% ( 1) 00:19:21.662 5.037 - 5.064: 99.4610% ( 1) 00:19:21.662 5.120 - 5.148: 99.4672% ( 1) 00:19:21.662 5.454 - 5.482: 99.4734% ( 1) 00:19:21.662 5.510 - 5.537: 99.4796% ( 1) 00:19:21.662 5.565 - 5.593: 99.4858% ( 1) 00:19:21.662 5.732 - 5.760: 99.4920% ( 1) 00:19:21.662 5.760 - 5.788: 99.4982% ( 1) 00:19:21.662 5.871 - 5.899: 99.5044% ( 1) 00:19:21.662 7.012 - 7.040: 99.5106% ( 1) 00:19:21.662 146.922 - 147.812: 99.5168% ( 1) 00:19:21.662 1994.574 - 2008.821: 99.5230% ( 1) 00:19:21.662 3989.148 - 4017.642: 99.9938% ( 76) 00:19:21.662 5983.722 - 6012.216: 100.0000% ( 1) 00:19:21.662 00:19:21.662 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:21.662 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:21.662 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:21.662 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:21.662 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:19:21.919 [ 00:19:21.919 { 00:19:21.919 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:21.919 "subtype": "Discovery", 00:19:21.919 "listen_addresses": [], 00:19:21.919 "allow_any_host": true, 00:19:21.919 "hosts": [] 00:19:21.919 }, 00:19:21.919 { 00:19:21.919 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:21.919 "subtype": "NVMe", 00:19:21.919 "listen_addresses": [ 00:19:21.919 { 00:19:21.919 "trtype": "VFIOUSER", 00:19:21.919 "adrfam": "IPv4", 00:19:21.919 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:21.919 "trsvcid": "0" 00:19:21.919 } 00:19:21.919 ], 00:19:21.919 "allow_any_host": true, 00:19:21.919 "hosts": [], 00:19:21.919 "serial_number": "SPDK1", 00:19:21.919 "model_number": "SPDK bdev Controller", 00:19:21.919 "max_namespaces": 32, 00:19:21.919 "min_cntlid": 1, 00:19:21.919 "max_cntlid": 65519, 00:19:21.919 "namespaces": [ 00:19:21.919 { 00:19:21.919 "nsid": 1, 00:19:21.919 "bdev_name": "Malloc1", 00:19:21.919 "name": "Malloc1", 00:19:21.919 "nguid": "64CF97E8442B4E48BBD2F9343D27D738", 00:19:21.919 "uuid": "64cf97e8-442b-4e48-bbd2-f9343d27d738" 00:19:21.919 } 00:19:21.919 ] 00:19:21.919 }, 00:19:21.919 { 00:19:21.919 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:21.919 "subtype": "NVMe", 00:19:21.919 "listen_addresses": [ 00:19:21.919 { 00:19:21.919 "trtype": "VFIOUSER", 00:19:21.919 "adrfam": "IPv4", 00:19:21.919 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:21.919 "trsvcid": "0" 00:19:21.919 } 00:19:21.919 ], 00:19:21.919 "allow_any_host": true, 00:19:21.919 "hosts": [], 00:19:21.919 "serial_number": "SPDK2", 00:19:21.919 "model_number": "SPDK bdev Controller", 00:19:21.919 "max_namespaces": 32, 00:19:21.919 "min_cntlid": 1, 00:19:21.919 "max_cntlid": 65519, 00:19:21.919 "namespaces": [ 00:19:21.920 { 00:19:21.920 "nsid": 1, 00:19:21.920 "bdev_name": "Malloc2", 00:19:21.920 "name": "Malloc2", 00:19:21.920 "nguid": "9F7CE8B9F37E42F7898E0890A4B81770", 00:19:21.920 "uuid": "9f7ce8b9-f37e-42f7-898e-0890a4b81770" 
00:19:21.920 } 00:19:21.920 ] 00:19:21.920 } 00:19:21.920 ] 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1560574 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:21.920 14:38:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:22.177 [2024-11-20 14:38:33.969389] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:22.177 Malloc3 00:19:22.177 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:22.435 [2024-11-20 14:38:34.204174] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:22.435 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:22.435 Asynchronous Event Request test 00:19:22.435 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:22.435 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:22.435 Registering asynchronous event callbacks... 00:19:22.435 Starting namespace attribute notice tests for all controllers... 00:19:22.435 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:22.435 aer_cb - Changed Namespace 00:19:22.435 Cleaning up... 
00:19:22.694 [ 00:19:22.694 { 00:19:22.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:22.694 "subtype": "Discovery", 00:19:22.694 "listen_addresses": [], 00:19:22.694 "allow_any_host": true, 00:19:22.694 "hosts": [] 00:19:22.694 }, 00:19:22.694 { 00:19:22.694 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:22.694 "subtype": "NVMe", 00:19:22.694 "listen_addresses": [ 00:19:22.694 { 00:19:22.694 "trtype": "VFIOUSER", 00:19:22.694 "adrfam": "IPv4", 00:19:22.694 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:22.694 "trsvcid": "0" 00:19:22.694 } 00:19:22.694 ], 00:19:22.694 "allow_any_host": true, 00:19:22.694 "hosts": [], 00:19:22.694 "serial_number": "SPDK1", 00:19:22.694 "model_number": "SPDK bdev Controller", 00:19:22.694 "max_namespaces": 32, 00:19:22.694 "min_cntlid": 1, 00:19:22.694 "max_cntlid": 65519, 00:19:22.694 "namespaces": [ 00:19:22.694 { 00:19:22.694 "nsid": 1, 00:19:22.694 "bdev_name": "Malloc1", 00:19:22.694 "name": "Malloc1", 00:19:22.694 "nguid": "64CF97E8442B4E48BBD2F9343D27D738", 00:19:22.694 "uuid": "64cf97e8-442b-4e48-bbd2-f9343d27d738" 00:19:22.694 }, 00:19:22.694 { 00:19:22.694 "nsid": 2, 00:19:22.694 "bdev_name": "Malloc3", 00:19:22.694 "name": "Malloc3", 00:19:22.694 "nguid": "65E540FA0575413E85DDAC08E23C2D69", 00:19:22.694 "uuid": "65e540fa-0575-413e-85dd-ac08e23c2d69" 00:19:22.694 } 00:19:22.694 ] 00:19:22.694 }, 00:19:22.694 { 00:19:22.694 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:22.694 "subtype": "NVMe", 00:19:22.694 "listen_addresses": [ 00:19:22.694 { 00:19:22.694 "trtype": "VFIOUSER", 00:19:22.694 "adrfam": "IPv4", 00:19:22.694 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:22.694 "trsvcid": "0" 00:19:22.694 } 00:19:22.694 ], 00:19:22.694 "allow_any_host": true, 00:19:22.694 "hosts": [], 00:19:22.694 "serial_number": "SPDK2", 00:19:22.694 "model_number": "SPDK bdev Controller", 00:19:22.694 "max_namespaces": 32, 00:19:22.694 "min_cntlid": 1, 00:19:22.694 "max_cntlid": 65519, 00:19:22.694 "namespaces": [ 
00:19:22.694 { 00:19:22.694 "nsid": 1, 00:19:22.694 "bdev_name": "Malloc2", 00:19:22.694 "name": "Malloc2", 00:19:22.694 "nguid": "9F7CE8B9F37E42F7898E0890A4B81770", 00:19:22.694 "uuid": "9f7ce8b9-f37e-42f7-898e-0890a4b81770" 00:19:22.694 } 00:19:22.694 ] 00:19:22.694 } 00:19:22.694 ] 00:19:22.694 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1560574 00:19:22.694 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:22.694 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:22.694 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:22.694 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:22.694 [2024-11-20 14:38:34.447703] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:19:22.694 [2024-11-20 14:38:34.447751] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560669 ] 00:19:22.694 [2024-11-20 14:38:34.488748] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:22.694 [2024-11-20 14:38:34.492991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:22.694 [2024-11-20 14:38:34.493016] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f14eb93c000 00:19:22.694 [2024-11-20 14:38:34.493992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:22.694 [2024-11-20 14:38:34.494995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:22.694 [2024-11-20 14:38:34.496002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:22.694 [2024-11-20 14:38:34.497012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:22.694 [2024-11-20 14:38:34.498012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:22.694 [2024-11-20 14:38:34.499024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:22.694 [2024-11-20 14:38:34.500028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:22.694 
[2024-11-20 14:38:34.501040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:22.695 [2024-11-20 14:38:34.502044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:22.695 [2024-11-20 14:38:34.502054] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f14eb931000 00:19:22.695 [2024-11-20 14:38:34.502995] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:22.695 [2024-11-20 14:38:34.512517] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:22.695 [2024-11-20 14:38:34.512542] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:22.695 [2024-11-20 14:38:34.517617] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:22.695 [2024-11-20 14:38:34.517655] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:22.695 [2024-11-20 14:38:34.517721] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:22.695 [2024-11-20 14:38:34.517734] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:22.695 [2024-11-20 14:38:34.517739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:22.695 [2024-11-20 14:38:34.518620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:22.695 [2024-11-20 14:38:34.518631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:22.695 [2024-11-20 14:38:34.518638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:22.695 [2024-11-20 14:38:34.519622] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:22.695 [2024-11-20 14:38:34.519631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:22.695 [2024-11-20 14:38:34.519638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:22.695 [2024-11-20 14:38:34.520631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:22.695 [2024-11-20 14:38:34.520640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:22.695 [2024-11-20 14:38:34.521639] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:22.695 [2024-11-20 14:38:34.521648] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:22.695 [2024-11-20 14:38:34.521652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:22.695 [2024-11-20 14:38:34.521659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:22.695 [2024-11-20 14:38:34.521766] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:22.695 [2024-11-20 14:38:34.521771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:22.695 [2024-11-20 14:38:34.521775] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:22.695 [2024-11-20 14:38:34.522643] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:22.695 [2024-11-20 14:38:34.523646] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:22.695 [2024-11-20 14:38:34.524650] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:22.695 [2024-11-20 14:38:34.525654] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.695 [2024-11-20 14:38:34.525692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:22.695 [2024-11-20 14:38:34.526666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:22.695 [2024-11-20 14:38:34.526675] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:22.695 [2024-11-20 14:38:34.526679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:22.695 [2024-11-20 14:38:34.526696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:22.695 [2024-11-20 14:38:34.526706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:22.695 [2024-11-20 14:38:34.526719] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:22.695 [2024-11-20 14:38:34.526724] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:22.695 [2024-11-20 14:38:34.526728] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.695 [2024-11-20 14:38:34.526739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:22.695 [2024-11-20 14:38:34.533955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:22.695 [2024-11-20 14:38:34.533967] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:22.695 [2024-11-20 14:38:34.533972] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:22.695 [2024-11-20 14:38:34.533976] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:22.695 [2024-11-20 14:38:34.533981] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:22.695 [2024-11-20 14:38:34.533988] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:22.695 [2024-11-20 14:38:34.533992] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:22.695 [2024-11-20 14:38:34.533997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:22.695 [2024-11-20 14:38:34.534004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:22.695 [2024-11-20 14:38:34.534014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:22.695 [2024-11-20 14:38:34.541951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:22.695 [2024-11-20 14:38:34.541964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.695 [2024-11-20 14:38:34.541972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.695 [2024-11-20 14:38:34.541979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.695 [2024-11-20 14:38:34.541987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.695 [2024-11-20 14:38:34.541991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:22.695 [2024-11-20 14:38:34.541998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:22.695 [2024-11-20 14:38:34.542006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:22.695 [2024-11-20 14:38:34.549952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:22.695 [2024-11-20 14:38:34.549962] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:22.695 [2024-11-20 14:38:34.549968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.549976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.549982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.549990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.557962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.558018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.558026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:22.696 
[2024-11-20 14:38:34.558033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:22.696 [2024-11-20 14:38:34.558037] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:22.696 [2024-11-20 14:38:34.558041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.696 [2024-11-20 14:38:34.558047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.565952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.565962] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:22.696 [2024-11-20 14:38:34.565973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.565980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.565986] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:22.696 [2024-11-20 14:38:34.565990] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:22.696 [2024-11-20 14:38:34.565993] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.696 [2024-11-20 14:38:34.565999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.573953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.573967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.573974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.573981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:22.696 [2024-11-20 14:38:34.573985] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:22.696 [2024-11-20 14:38:34.573988] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.696 [2024-11-20 14:38:34.573994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.581951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.581960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.581969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.581977] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.581982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.581986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.581991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.581995] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:22.696 [2024-11-20 14:38:34.581999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:22.696 [2024-11-20 14:38:34.582004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:22.696 [2024-11-20 14:38:34.582020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.589952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.589965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.597951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.597963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.605952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 
14:38:34.605964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.613951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.613966] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:22.696 [2024-11-20 14:38:34.613971] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:22.696 [2024-11-20 14:38:34.613974] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:22.696 [2024-11-20 14:38:34.613977] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:22.696 [2024-11-20 14:38:34.613980] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:22.696 [2024-11-20 14:38:34.613986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:22.696 [2024-11-20 14:38:34.613992] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:22.696 [2024-11-20 14:38:34.613997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:22.696 [2024-11-20 14:38:34.614000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.696 [2024-11-20 14:38:34.614005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.614013] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:22.696 [2024-11-20 14:38:34.614017] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:22.696 [2024-11-20 14:38:34.614020] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.696 [2024-11-20 14:38:34.614026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.614032] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:22.696 [2024-11-20 14:38:34.614036] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:22.696 [2024-11-20 14:38:34.614039] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:22.696 [2024-11-20 14:38:34.614044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:22.696 [2024-11-20 14:38:34.621951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.621965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.621974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:22.696 [2024-11-20 14:38:34.621981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:22.696 ===================================================== 00:19:22.696 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:22.697 ===================================================== 00:19:22.697 Controller Capabilities/Features 00:19:22.697 
================================ 00:19:22.697 Vendor ID: 4e58 00:19:22.697 Subsystem Vendor ID: 4e58 00:19:22.697 Serial Number: SPDK2 00:19:22.697 Model Number: SPDK bdev Controller 00:19:22.697 Firmware Version: 25.01 00:19:22.697 Recommended Arb Burst: 6 00:19:22.697 IEEE OUI Identifier: 8d 6b 50 00:19:22.697 Multi-path I/O 00:19:22.697 May have multiple subsystem ports: Yes 00:19:22.697 May have multiple controllers: Yes 00:19:22.697 Associated with SR-IOV VF: No 00:19:22.697 Max Data Transfer Size: 131072 00:19:22.697 Max Number of Namespaces: 32 00:19:22.697 Max Number of I/O Queues: 127 00:19:22.697 NVMe Specification Version (VS): 1.3 00:19:22.697 NVMe Specification Version (Identify): 1.3 00:19:22.697 Maximum Queue Entries: 256 00:19:22.697 Contiguous Queues Required: Yes 00:19:22.697 Arbitration Mechanisms Supported 00:19:22.697 Weighted Round Robin: Not Supported 00:19:22.697 Vendor Specific: Not Supported 00:19:22.697 Reset Timeout: 15000 ms 00:19:22.697 Doorbell Stride: 4 bytes 00:19:22.697 NVM Subsystem Reset: Not Supported 00:19:22.697 Command Sets Supported 00:19:22.697 NVM Command Set: Supported 00:19:22.697 Boot Partition: Not Supported 00:19:22.697 Memory Page Size Minimum: 4096 bytes 00:19:22.697 Memory Page Size Maximum: 4096 bytes 00:19:22.697 Persistent Memory Region: Not Supported 00:19:22.697 Optional Asynchronous Events Supported 00:19:22.697 Namespace Attribute Notices: Supported 00:19:22.697 Firmware Activation Notices: Not Supported 00:19:22.697 ANA Change Notices: Not Supported 00:19:22.697 PLE Aggregate Log Change Notices: Not Supported 00:19:22.697 LBA Status Info Alert Notices: Not Supported 00:19:22.697 EGE Aggregate Log Change Notices: Not Supported 00:19:22.697 Normal NVM Subsystem Shutdown event: Not Supported 00:19:22.697 Zone Descriptor Change Notices: Not Supported 00:19:22.697 Discovery Log Change Notices: Not Supported 00:19:22.697 Controller Attributes 00:19:22.697 128-bit Host Identifier: Supported 00:19:22.697 
Non-Operational Permissive Mode: Not Supported 00:19:22.697 NVM Sets: Not Supported 00:19:22.697 Read Recovery Levels: Not Supported 00:19:22.697 Endurance Groups: Not Supported 00:19:22.697 Predictable Latency Mode: Not Supported 00:19:22.697 Traffic Based Keep ALive: Not Supported 00:19:22.697 Namespace Granularity: Not Supported 00:19:22.697 SQ Associations: Not Supported 00:19:22.697 UUID List: Not Supported 00:19:22.697 Multi-Domain Subsystem: Not Supported 00:19:22.697 Fixed Capacity Management: Not Supported 00:19:22.697 Variable Capacity Management: Not Supported 00:19:22.697 Delete Endurance Group: Not Supported 00:19:22.697 Delete NVM Set: Not Supported 00:19:22.697 Extended LBA Formats Supported: Not Supported 00:19:22.697 Flexible Data Placement Supported: Not Supported 00:19:22.697 00:19:22.697 Controller Memory Buffer Support 00:19:22.697 ================================ 00:19:22.697 Supported: No 00:19:22.697 00:19:22.697 Persistent Memory Region Support 00:19:22.697 ================================ 00:19:22.697 Supported: No 00:19:22.697 00:19:22.697 Admin Command Set Attributes 00:19:22.697 ============================ 00:19:22.697 Security Send/Receive: Not Supported 00:19:22.697 Format NVM: Not Supported 00:19:22.697 Firmware Activate/Download: Not Supported 00:19:22.697 Namespace Management: Not Supported 00:19:22.697 Device Self-Test: Not Supported 00:19:22.697 Directives: Not Supported 00:19:22.697 NVMe-MI: Not Supported 00:19:22.697 Virtualization Management: Not Supported 00:19:22.697 Doorbell Buffer Config: Not Supported 00:19:22.697 Get LBA Status Capability: Not Supported 00:19:22.697 Command & Feature Lockdown Capability: Not Supported 00:19:22.697 Abort Command Limit: 4 00:19:22.697 Async Event Request Limit: 4 00:19:22.697 Number of Firmware Slots: N/A 00:19:22.697 Firmware Slot 1 Read-Only: N/A 00:19:22.697 Firmware Activation Without Reset: N/A 00:19:22.697 Multiple Update Detection Support: N/A 00:19:22.697 Firmware Update 
Granularity: No Information Provided 00:19:22.697 Per-Namespace SMART Log: No 00:19:22.697 Asymmetric Namespace Access Log Page: Not Supported 00:19:22.697 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:22.697 Command Effects Log Page: Supported 00:19:22.697 Get Log Page Extended Data: Supported 00:19:22.697 Telemetry Log Pages: Not Supported 00:19:22.697 Persistent Event Log Pages: Not Supported 00:19:22.697 Supported Log Pages Log Page: May Support 00:19:22.697 Commands Supported & Effects Log Page: Not Supported 00:19:22.697 Feature Identifiers & Effects Log Page:May Support 00:19:22.697 NVMe-MI Commands & Effects Log Page: May Support 00:19:22.697 Data Area 4 for Telemetry Log: Not Supported 00:19:22.697 Error Log Page Entries Supported: 128 00:19:22.697 Keep Alive: Supported 00:19:22.697 Keep Alive Granularity: 10000 ms 00:19:22.697 00:19:22.697 NVM Command Set Attributes 00:19:22.697 ========================== 00:19:22.697 Submission Queue Entry Size 00:19:22.697 Max: 64 00:19:22.697 Min: 64 00:19:22.697 Completion Queue Entry Size 00:19:22.697 Max: 16 00:19:22.697 Min: 16 00:19:22.697 Number of Namespaces: 32 00:19:22.697 Compare Command: Supported 00:19:22.697 Write Uncorrectable Command: Not Supported 00:19:22.697 Dataset Management Command: Supported 00:19:22.697 Write Zeroes Command: Supported 00:19:22.697 Set Features Save Field: Not Supported 00:19:22.697 Reservations: Not Supported 00:19:22.697 Timestamp: Not Supported 00:19:22.697 Copy: Supported 00:19:22.697 Volatile Write Cache: Present 00:19:22.697 Atomic Write Unit (Normal): 1 00:19:22.697 Atomic Write Unit (PFail): 1 00:19:22.697 Atomic Compare & Write Unit: 1 00:19:22.697 Fused Compare & Write: Supported 00:19:22.697 Scatter-Gather List 00:19:22.697 SGL Command Set: Supported (Dword aligned) 00:19:22.697 SGL Keyed: Not Supported 00:19:22.697 SGL Bit Bucket Descriptor: Not Supported 00:19:22.697 SGL Metadata Pointer: Not Supported 00:19:22.697 Oversized SGL: Not Supported 00:19:22.697 SGL 
Metadata Address: Not Supported 00:19:22.697 SGL Offset: Not Supported 00:19:22.697 Transport SGL Data Block: Not Supported 00:19:22.697 Replay Protected Memory Block: Not Supported 00:19:22.697 00:19:22.697 Firmware Slot Information 00:19:22.697 ========================= 00:19:22.697 Active slot: 1 00:19:22.697 Slot 1 Firmware Revision: 25.01 00:19:22.697 00:19:22.697 00:19:22.697 Commands Supported and Effects 00:19:22.697 ============================== 00:19:22.697 Admin Commands 00:19:22.697 -------------- 00:19:22.697 Get Log Page (02h): Supported 00:19:22.697 Identify (06h): Supported 00:19:22.697 Abort (08h): Supported 00:19:22.697 Set Features (09h): Supported 00:19:22.697 Get Features (0Ah): Supported 00:19:22.697 Asynchronous Event Request (0Ch): Supported 00:19:22.697 Keep Alive (18h): Supported 00:19:22.697 I/O Commands 00:19:22.697 ------------ 00:19:22.697 Flush (00h): Supported LBA-Change 00:19:22.697 Write (01h): Supported LBA-Change 00:19:22.697 Read (02h): Supported 00:19:22.697 Compare (05h): Supported 00:19:22.697 Write Zeroes (08h): Supported LBA-Change 00:19:22.697 Dataset Management (09h): Supported LBA-Change 00:19:22.697 Copy (19h): Supported LBA-Change 00:19:22.697 00:19:22.697 Error Log 00:19:22.697 ========= 00:19:22.697 00:19:22.697 Arbitration 00:19:22.697 =========== 00:19:22.697 Arbitration Burst: 1 00:19:22.698 00:19:22.698 Power Management 00:19:22.698 ================ 00:19:22.698 Number of Power States: 1 00:19:22.698 Current Power State: Power State #0 00:19:22.698 Power State #0: 00:19:22.698 Max Power: 0.00 W 00:19:22.698 Non-Operational State: Operational 00:19:22.698 Entry Latency: Not Reported 00:19:22.698 Exit Latency: Not Reported 00:19:22.698 Relative Read Throughput: 0 00:19:22.698 Relative Read Latency: 0 00:19:22.698 Relative Write Throughput: 0 00:19:22.698 Relative Write Latency: 0 00:19:22.698 Idle Power: Not Reported 00:19:22.698 Active Power: Not Reported 00:19:22.698 Non-Operational Permissive Mode: Not 
Supported 00:19:22.698 00:19:22.698 Health Information 00:19:22.698 ================== 00:19:22.698 Critical Warnings: 00:19:22.698 Available Spare Space: OK 00:19:22.698 Temperature: OK 00:19:22.698 Device Reliability: OK 00:19:22.698 Read Only: No 00:19:22.698 Volatile Memory Backup: OK 00:19:22.698 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:22.698 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:22.698 Available Spare: 0% 00:19:22.698 Available Sp[2024-11-20 14:38:34.622072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:22.698 [2024-11-20 14:38:34.629953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:22.698 [2024-11-20 14:38:34.629982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:22.698 [2024-11-20 14:38:34.629990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.698 [2024-11-20 14:38:34.629996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.698 [2024-11-20 14:38:34.630002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.698 [2024-11-20 14:38:34.630007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.698 [2024-11-20 14:38:34.630057] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:22.698 [2024-11-20 14:38:34.630068] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:22.698 
[2024-11-20 14:38:34.631063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:22.698 [2024-11-20 14:38:34.631106] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:22.698 [2024-11-20 14:38:34.631112] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:22.698 [2024-11-20 14:38:34.632069] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:22.698 [2024-11-20 14:38:34.632080] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:22.698 [2024-11-20 14:38:34.632128] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:22.698 [2024-11-20 14:38:34.633113] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:22.955 are Threshold: 0% 00:19:22.955 Life Percentage Used: 0% 00:19:22.955 Data Units Read: 0 00:19:22.955 Data Units Written: 0 00:19:22.955 Host Read Commands: 0 00:19:22.955 Host Write Commands: 0 00:19:22.955 Controller Busy Time: 0 minutes 00:19:22.955 Power Cycles: 0 00:19:22.955 Power On Hours: 0 hours 00:19:22.955 Unsafe Shutdowns: 0 00:19:22.955 Unrecoverable Media Errors: 0 00:19:22.955 Lifetime Error Log Entries: 0 00:19:22.955 Warning Temperature Time: 0 minutes 00:19:22.955 Critical Temperature Time: 0 minutes 00:19:22.955 00:19:22.955 Number of Queues 00:19:22.955 ================ 00:19:22.955 Number of I/O Submission Queues: 127 00:19:22.955 Number of I/O Completion Queues: 127 00:19:22.955 00:19:22.955 Active Namespaces 00:19:22.955 ================= 00:19:22.955 Namespace ID:1 00:19:22.955 Error Recovery Timeout: Unlimited 
00:19:22.955 Command Set Identifier: NVM (00h) 00:19:22.955 Deallocate: Supported 00:19:22.955 Deallocated/Unwritten Error: Not Supported 00:19:22.955 Deallocated Read Value: Unknown 00:19:22.955 Deallocate in Write Zeroes: Not Supported 00:19:22.955 Deallocated Guard Field: 0xFFFF 00:19:22.955 Flush: Supported 00:19:22.955 Reservation: Supported 00:19:22.955 Namespace Sharing Capabilities: Multiple Controllers 00:19:22.955 Size (in LBAs): 131072 (0GiB) 00:19:22.955 Capacity (in LBAs): 131072 (0GiB) 00:19:22.955 Utilization (in LBAs): 131072 (0GiB) 00:19:22.955 NGUID: 9F7CE8B9F37E42F7898E0890A4B81770 00:19:22.955 UUID: 9f7ce8b9-f37e-42f7-898e-0890a4b81770 00:19:22.955 Thin Provisioning: Not Supported 00:19:22.955 Per-NS Atomic Units: Yes 00:19:22.955 Atomic Boundary Size (Normal): 0 00:19:22.955 Atomic Boundary Size (PFail): 0 00:19:22.955 Atomic Boundary Offset: 0 00:19:22.955 Maximum Single Source Range Length: 65535 00:19:22.955 Maximum Copy Length: 65535 00:19:22.955 Maximum Source Range Count: 1 00:19:22.955 NGUID/EUI64 Never Reused: No 00:19:22.955 Namespace Write Protected: No 00:19:22.955 Number of LBA Formats: 1 00:19:22.955 Current LBA Format: LBA Format #00 00:19:22.955 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:22.955 00:19:22.955 14:38:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:22.955 [2024-11-20 14:38:34.861329] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:28.215 Initializing NVMe Controllers 00:19:28.215 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:28.215 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:19:28.215 Initialization complete. Launching workers. 00:19:28.215 ======================================================== 00:19:28.215 Latency(us) 00:19:28.215 Device Information : IOPS MiB/s Average min max 00:19:28.215 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39913.81 155.91 3206.74 1005.64 8533.07 00:19:28.215 ======================================================== 00:19:28.215 Total : 39913.81 155.91 3206.74 1005.64 8533.07 00:19:28.215 00:19:28.216 [2024-11-20 14:38:39.963202] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:28.216 14:38:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:28.472 [2024-11-20 14:38:40.202901] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:33.772 Initializing NVMe Controllers 00:19:33.772 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:33.772 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:33.772 Initialization complete. Launching workers. 
00:19:33.772 ======================================================== 00:19:33.772 Latency(us) 00:19:33.772 Device Information : IOPS MiB/s Average min max 00:19:33.772 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39925.95 155.96 3205.75 1015.18 7564.42 00:19:33.772 ======================================================== 00:19:33.772 Total : 39925.95 155.96 3205.75 1015.18 7564.42 00:19:33.772 00:19:33.772 [2024-11-20 14:38:45.221283] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:33.772 14:38:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:33.772 [2024-11-20 14:38:45.435507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:39.033 [2024-11-20 14:38:50.580042] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:39.033 Initializing NVMe Controllers 00:19:39.033 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:39.033 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:39.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:39.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:39.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:39.033 Initialization complete. Launching workers. 
00:19:39.033 Starting thread on core 2 00:19:39.033 Starting thread on core 3 00:19:39.033 Starting thread on core 1 00:19:39.033 14:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:39.033 [2024-11-20 14:38:50.875416] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:42.312 [2024-11-20 14:38:53.942247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:42.312 Initializing NVMe Controllers 00:19:42.312 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:42.312 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:42.312 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:42.312 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:42.312 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:42.312 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:42.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:42.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:42.312 Initialization complete. Launching workers. 
00:19:42.312 Starting thread on core 1 with urgent priority queue 00:19:42.312 Starting thread on core 2 with urgent priority queue 00:19:42.312 Starting thread on core 3 with urgent priority queue 00:19:42.312 Starting thread on core 0 with urgent priority queue 00:19:42.312 SPDK bdev Controller (SPDK2 ) core 0: 1227.33 IO/s 81.48 secs/100000 ios 00:19:42.312 SPDK bdev Controller (SPDK2 ) core 1: 1519.67 IO/s 65.80 secs/100000 ios 00:19:42.312 SPDK bdev Controller (SPDK2 ) core 2: 1184.00 IO/s 84.46 secs/100000 ios 00:19:42.312 SPDK bdev Controller (SPDK2 ) core 3: 1638.67 IO/s 61.03 secs/100000 ios 00:19:42.312 ======================================================== 00:19:42.312 00:19:42.312 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:42.312 [2024-11-20 14:38:54.231605] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:42.312 Initializing NVMe Controllers 00:19:42.312 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:42.312 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:42.312 Namespace ID: 1 size: 0GB 00:19:42.312 Initialization complete. 00:19:42.312 INFO: using host memory buffer for IO 00:19:42.312 Hello world! 
00:19:42.312 [2024-11-20 14:38:54.241670] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:42.570 14:38:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:42.570 [2024-11-20 14:38:54.519904] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:43.942 Initializing NVMe Controllers 00:19:43.942 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:43.942 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:43.942 Initialization complete. Launching workers. 00:19:43.942 submit (in ns) avg, min, max = 8554.4, 3259.1, 4993950.4 00:19:43.942 complete (in ns) avg, min, max = 19201.2, 1767.0, 6990448.7 00:19:43.942 00:19:43.942 Submit histogram 00:19:43.942 ================ 00:19:43.942 Range in us Cumulative Count 00:19:43.942 3.256 - 3.270: 0.0124% ( 2) 00:19:43.942 3.270 - 3.283: 0.0311% ( 3) 00:19:43.942 3.283 - 3.297: 0.1307% ( 16) 00:19:43.942 3.297 - 3.311: 0.3921% ( 42) 00:19:43.942 3.311 - 3.325: 0.9336% ( 87) 00:19:43.942 3.325 - 3.339: 2.3030% ( 220) 00:19:43.942 3.339 - 3.353: 6.4857% ( 672) 00:19:43.942 3.353 - 3.367: 11.8947% ( 869) 00:19:43.942 3.367 - 3.381: 18.0070% ( 982) 00:19:43.942 3.381 - 3.395: 24.6794% ( 1072) 00:19:43.942 3.395 - 3.409: 30.5988% ( 951) 00:19:43.942 3.409 - 3.423: 35.7276% ( 824) 00:19:43.942 3.423 - 3.437: 40.7444% ( 806) 00:19:43.942 3.437 - 3.450: 45.6616% ( 790) 00:19:43.942 3.450 - 3.464: 49.8880% ( 679) 00:19:43.942 3.464 - 3.478: 54.1392% ( 683) 00:19:43.942 3.478 - 3.492: 59.7286% ( 898) 00:19:43.942 3.492 - 3.506: 66.6999% ( 1120) 00:19:43.942 3.506 - 3.520: 71.2747% ( 735) 00:19:43.942 3.520 - 3.534: 76.0986% ( 775) 00:19:43.942 3.534 - 3.548: 80.7046% ( 740) 
00:19:43.942 3.548 - 3.562: 83.8852% ( 511) 00:19:43.942 3.562 - 3.590: 86.8044% ( 469) 00:19:43.942 3.590 - 3.617: 87.6074% ( 129) 00:19:43.942 3.617 - 3.645: 88.4290% ( 132) 00:19:43.942 3.645 - 3.673: 90.1718% ( 280) 00:19:43.942 3.673 - 3.701: 91.9084% ( 279) 00:19:43.942 3.701 - 3.729: 93.5703% ( 267) 00:19:43.942 3.729 - 3.757: 95.4251% ( 298) 00:19:43.942 3.757 - 3.784: 97.0248% ( 257) 00:19:43.942 3.784 - 3.812: 98.1887% ( 187) 00:19:43.942 3.812 - 3.840: 98.7738% ( 94) 00:19:43.942 3.840 - 3.868: 99.1597% ( 62) 00:19:43.942 3.868 - 3.896: 99.3962% ( 38) 00:19:43.942 3.896 - 3.923: 99.4958% ( 16) 00:19:43.942 3.923 - 3.951: 99.5332% ( 6) 00:19:43.942 3.979 - 4.007: 99.5456% ( 2) 00:19:43.942 4.007 - 4.035: 99.5518% ( 1) 00:19:43.942 4.035 - 4.063: 99.5643% ( 2) 00:19:43.942 4.063 - 4.090: 99.5767% ( 2) 00:19:43.942 4.090 - 4.118: 99.5830% ( 1) 00:19:43.942 4.202 - 4.230: 99.5892% ( 1) 00:19:43.942 4.285 - 4.313: 99.5954% ( 1) 00:19:43.942 4.313 - 4.341: 99.6016% ( 1) 00:19:43.942 5.009 - 5.037: 99.6079% ( 1) 00:19:43.942 5.148 - 5.176: 99.6265% ( 3) 00:19:43.942 5.203 - 5.231: 99.6328% ( 1) 00:19:43.942 5.231 - 5.259: 99.6390% ( 1) 00:19:43.942 5.287 - 5.315: 99.6514% ( 2) 00:19:43.942 5.343 - 5.370: 99.6577% ( 1) 00:19:43.942 5.370 - 5.398: 99.6639% ( 1) 00:19:43.942 5.398 - 5.426: 99.6763% ( 2) 00:19:43.942 5.426 - 5.454: 99.6826% ( 1) 00:19:43.942 5.482 - 5.510: 99.6950% ( 2) 00:19:43.942 5.510 - 5.537: 99.7012% ( 1) 00:19:43.942 5.621 - 5.649: 99.7075% ( 1) 00:19:43.942 5.843 - 5.871: 99.7137% ( 1) 00:19:43.942 5.871 - 5.899: 99.7199% ( 1) 00:19:43.942 5.955 - 5.983: 99.7261% ( 1) 00:19:43.942 5.983 - 6.010: 99.7324% ( 1) 00:19:43.942 6.038 - 6.066: 99.7386% ( 1) 00:19:43.942 6.177 - 6.205: 99.7448% ( 1) 00:19:43.942 6.261 - 6.289: 99.7510% ( 1) 00:19:43.942 6.289 - 6.317: 99.7573% ( 1) 00:19:43.942 6.372 - 6.400: 99.7635% ( 1) 00:19:43.942 6.428 - 6.456: 99.7697% ( 1) 00:19:43.942 6.817 - 6.845: 99.7759% ( 1) 00:19:43.942 6.845 - 6.873: 99.7821% ( 1) 
00:19:43.942 6.873 - 6.901: 99.7884% ( 1) 00:19:43.942 6.957 - 6.984: 99.8008% ( 2) 00:19:43.942 7.068 - 7.096: 99.8070% ( 1) 00:19:43.942 7.123 - 7.179: 99.8195% ( 2) 00:19:43.942 7.179 - 7.235: 99.8319% ( 2) 00:19:43.942 7.346 - 7.402: 99.8382% ( 1) 00:19:43.942 7.402 - 7.457: 99.8444% ( 1) 00:19:43.942 7.457 - 7.513: 99.8506% ( 1) 00:19:43.942 7.513 - 7.569: 99.8568% ( 1) 00:19:43.942 8.070 - 8.125: 99.8631% ( 1) 00:19:43.942 8.181 - 8.237: 99.8693% ( 1) 00:19:43.942 [2024-11-20 14:38:55.612013] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:43.942 484.397 - 487.958: 99.8755% ( 1) 00:19:43.942 3989.148 - 4017.642: 99.9938% ( 19) 00:19:43.942 4986.435 - 5014.929: 100.0000% ( 1) 00:19:43.942 00:19:43.942 Complete histogram 00:19:43.942 ================== 00:19:43.942 Range in us Cumulative Count 00:19:43.942 1.767 - 1.774: 0.0373% ( 6) 00:19:43.942 1.774 - 1.781: 0.0871% ( 8) 00:19:43.942 1.781 - 1.795: 0.1494% ( 10) 00:19:43.942 1.795 - 1.809: 0.1743% ( 4) 00:19:43.942 1.809 - 1.823: 2.5271% ( 378) 00:19:43.942 1.823 - 1.837: 41.5287% ( 6266) 00:19:43.942 1.837 - 1.850: 68.3991% ( 4317) 00:19:43.942 1.850 - 1.864: 73.7520% ( 860) 00:19:43.942 1.864 - 1.878: 82.5781% ( 1418) 00:19:43.942 1.878 - 1.892: 92.5682% ( 1605) 00:19:43.942 1.892 - 1.906: 95.9417% ( 542) 00:19:43.942 1.906 - 1.920: 97.8277% ( 303) 00:19:43.942 1.920 - 1.934: 98.6618% ( 134) 00:19:43.943 1.934 - 1.948: 98.9419% ( 45) 00:19:43.943 1.948 - 1.962: 99.0975% ( 25) 00:19:43.943 1.962 - 1.976: 99.1846% ( 14) 00:19:43.943 1.976 - 1.990: 99.2220% ( 6) 00:19:43.943 1.990 - 2.003: 99.2406% ( 3) 00:19:43.943 2.003 - 2.017: 99.2655% ( 4) 00:19:43.943 2.031 - 2.045: 99.2780% ( 2) 00:19:43.943 2.045 - 2.059: 99.2967% ( 3) 00:19:43.943 2.059 - 2.073: 99.3153% ( 3) 00:19:43.943 2.101 - 2.115: 99.3278% ( 2) 00:19:43.943 2.143 - 2.157: 99.3464% ( 3) 00:19:43.943 2.157 - 2.170: 99.3527% ( 1) 00:19:43.943 2.184 - 2.198: 99.3651% ( 2) 00:19:43.943 
2.212 - 2.226: 99.3713% ( 1) 00:19:43.943 2.254 - 2.268: 99.3776% ( 1) 00:19:43.943 2.337 - 2.351: 99.3838% ( 1) 00:19:43.943 2.351 - 2.365: 99.3900% ( 1) 00:19:43.943 2.407 - 2.421: 99.3962% ( 1) 00:19:43.943 2.574 - 2.588: 99.4025% ( 1) 00:19:43.943 3.784 - 3.812: 99.4087% ( 1) 00:19:43.943 3.812 - 3.840: 99.4149% ( 1) 00:19:43.943 3.868 - 3.896: 99.4211% ( 1) 00:19:43.943 3.923 - 3.951: 99.4336% ( 2) 00:19:43.943 4.007 - 4.035: 99.4398% ( 1) 00:19:43.943 4.257 - 4.285: 99.4460% ( 1) 00:19:43.943 4.341 - 4.369: 99.4523% ( 1) 00:19:43.943 4.369 - 4.397: 99.4585% ( 1) 00:19:43.943 4.424 - 4.452: 99.4647% ( 1) 00:19:43.943 4.536 - 4.563: 99.4709% ( 1) 00:19:43.943 4.703 - 4.730: 99.4772% ( 1) 00:19:43.943 4.730 - 4.758: 99.4834% ( 1) 00:19:43.943 5.343 - 5.370: 99.4896% ( 1) 00:19:43.943 5.537 - 5.565: 99.4958% ( 1) 00:19:43.943 5.816 - 5.843: 99.5021% ( 1) 00:19:43.943 5.955 - 5.983: 99.5083% ( 1) 00:19:43.943 6.010 - 6.038: 99.5145% ( 1) 00:19:43.943 6.066 - 6.094: 99.5207% ( 1) 00:19:43.943 6.233 - 6.261: 99.5270% ( 1) 00:19:43.943 6.261 - 6.289: 99.5332% ( 1) 00:19:43.943 7.235 - 7.290: 99.5394% ( 1) 00:19:43.943 8.515 - 8.570: 99.5456% ( 1) 00:19:43.943 8.849 - 8.904: 99.5518% ( 1) 00:19:43.943 11.743 - 11.798: 99.5581% ( 1) 00:19:43.943 13.802 - 13.857: 99.5643% ( 1) 00:19:43.943 1025.781 - 1032.904: 99.5705% ( 1) 00:19:43.943 1068.522 - 1075.645: 99.5767% ( 1) 00:19:43.943 2991.861 - 3006.108: 99.5830% ( 1) 00:19:43.943 3932.160 - 3960.654: 99.5892% ( 1) 00:19:43.943 3989.148 - 4017.642: 99.9876% ( 64) 00:19:43.943 6981.009 - 7009.503: 100.0000% ( 2) 00:19:43.943 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:43.943 [ 00:19:43.943 { 00:19:43.943 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:43.943 "subtype": "Discovery", 00:19:43.943 "listen_addresses": [], 00:19:43.943 "allow_any_host": true, 00:19:43.943 "hosts": [] 00:19:43.943 }, 00:19:43.943 { 00:19:43.943 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:43.943 "subtype": "NVMe", 00:19:43.943 "listen_addresses": [ 00:19:43.943 { 00:19:43.943 "trtype": "VFIOUSER", 00:19:43.943 "adrfam": "IPv4", 00:19:43.943 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:43.943 "trsvcid": "0" 00:19:43.943 } 00:19:43.943 ], 00:19:43.943 "allow_any_host": true, 00:19:43.943 "hosts": [], 00:19:43.943 "serial_number": "SPDK1", 00:19:43.943 "model_number": "SPDK bdev Controller", 00:19:43.943 "max_namespaces": 32, 00:19:43.943 "min_cntlid": 1, 00:19:43.943 "max_cntlid": 65519, 00:19:43.943 "namespaces": [ 00:19:43.943 { 00:19:43.943 "nsid": 1, 00:19:43.943 "bdev_name": "Malloc1", 00:19:43.943 "name": "Malloc1", 00:19:43.943 "nguid": "64CF97E8442B4E48BBD2F9343D27D738", 00:19:43.943 "uuid": "64cf97e8-442b-4e48-bbd2-f9343d27d738" 00:19:43.943 }, 00:19:43.943 { 00:19:43.943 "nsid": 2, 00:19:43.943 "bdev_name": "Malloc3", 00:19:43.943 "name": "Malloc3", 00:19:43.943 "nguid": "65E540FA0575413E85DDAC08E23C2D69", 00:19:43.943 "uuid": "65e540fa-0575-413e-85dd-ac08e23c2d69" 00:19:43.943 } 00:19:43.943 ] 00:19:43.943 }, 00:19:43.943 { 00:19:43.943 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:43.943 "subtype": "NVMe", 00:19:43.943 "listen_addresses": [ 00:19:43.943 { 00:19:43.943 "trtype": "VFIOUSER", 00:19:43.943 "adrfam": "IPv4", 00:19:43.943 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:19:43.943 "trsvcid": "0" 00:19:43.943 } 00:19:43.943 ], 00:19:43.943 "allow_any_host": true, 00:19:43.943 "hosts": [], 00:19:43.943 "serial_number": "SPDK2", 00:19:43.943 "model_number": "SPDK bdev Controller", 00:19:43.943 "max_namespaces": 32, 00:19:43.943 "min_cntlid": 1, 00:19:43.943 "max_cntlid": 65519, 00:19:43.943 "namespaces": [ 00:19:43.943 { 00:19:43.943 "nsid": 1, 00:19:43.943 "bdev_name": "Malloc2", 00:19:43.943 "name": "Malloc2", 00:19:43.943 "nguid": "9F7CE8B9F37E42F7898E0890A4B81770", 00:19:43.943 "uuid": "9f7ce8b9-f37e-42f7-898e-0890a4b81770" 00:19:43.943 } 00:19:43.943 ] 00:19:43.943 } 00:19:43.943 ] 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1564119 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:43.943 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:44.201 [2024-11-20 14:38:56.012350] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:44.201 Malloc4 00:19:44.201 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:44.458 [2024-11-20 14:38:56.262296] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:44.458 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:44.458 Asynchronous Event Request test 00:19:44.458 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.458 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:44.458 Registering asynchronous event callbacks... 00:19:44.458 Starting namespace attribute notice tests for all controllers... 00:19:44.458 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:44.458 aer_cb - Changed Namespace 00:19:44.458 Cleaning up... 
00:19:44.716 [ 00:19:44.716 { 00:19:44.716 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:44.716 "subtype": "Discovery", 00:19:44.716 "listen_addresses": [], 00:19:44.716 "allow_any_host": true, 00:19:44.716 "hosts": [] 00:19:44.716 }, 00:19:44.716 { 00:19:44.716 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:44.716 "subtype": "NVMe", 00:19:44.716 "listen_addresses": [ 00:19:44.716 { 00:19:44.716 "trtype": "VFIOUSER", 00:19:44.716 "adrfam": "IPv4", 00:19:44.716 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:44.716 "trsvcid": "0" 00:19:44.716 } 00:19:44.716 ], 00:19:44.716 "allow_any_host": true, 00:19:44.716 "hosts": [], 00:19:44.716 "serial_number": "SPDK1", 00:19:44.716 "model_number": "SPDK bdev Controller", 00:19:44.716 "max_namespaces": 32, 00:19:44.716 "min_cntlid": 1, 00:19:44.716 "max_cntlid": 65519, 00:19:44.716 "namespaces": [ 00:19:44.716 { 00:19:44.716 "nsid": 1, 00:19:44.716 "bdev_name": "Malloc1", 00:19:44.716 "name": "Malloc1", 00:19:44.716 "nguid": "64CF97E8442B4E48BBD2F9343D27D738", 00:19:44.716 "uuid": "64cf97e8-442b-4e48-bbd2-f9343d27d738" 00:19:44.716 }, 00:19:44.716 { 00:19:44.716 "nsid": 2, 00:19:44.717 "bdev_name": "Malloc3", 00:19:44.717 "name": "Malloc3", 00:19:44.717 "nguid": "65E540FA0575413E85DDAC08E23C2D69", 00:19:44.717 "uuid": "65e540fa-0575-413e-85dd-ac08e23c2d69" 00:19:44.717 } 00:19:44.717 ] 00:19:44.717 }, 00:19:44.717 { 00:19:44.717 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:44.717 "subtype": "NVMe", 00:19:44.717 "listen_addresses": [ 00:19:44.717 { 00:19:44.717 "trtype": "VFIOUSER", 00:19:44.717 "adrfam": "IPv4", 00:19:44.717 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:44.717 "trsvcid": "0" 00:19:44.717 } 00:19:44.717 ], 00:19:44.717 "allow_any_host": true, 00:19:44.717 "hosts": [], 00:19:44.717 "serial_number": "SPDK2", 00:19:44.717 "model_number": "SPDK bdev Controller", 00:19:44.717 "max_namespaces": 32, 00:19:44.717 "min_cntlid": 1, 00:19:44.717 "max_cntlid": 65519, 00:19:44.717 "namespaces": [ 
00:19:44.717 { 00:19:44.717 "nsid": 1, 00:19:44.717 "bdev_name": "Malloc2", 00:19:44.717 "name": "Malloc2", 00:19:44.717 "nguid": "9F7CE8B9F37E42F7898E0890A4B81770", 00:19:44.717 "uuid": "9f7ce8b9-f37e-42f7-898e-0890a4b81770" 00:19:44.717 }, 00:19:44.717 { 00:19:44.717 "nsid": 2, 00:19:44.717 "bdev_name": "Malloc4", 00:19:44.717 "name": "Malloc4", 00:19:44.717 "nguid": "1AAFE6D8DE91403F8AE0571BF4E1B97B", 00:19:44.717 "uuid": "1aafe6d8-de91-403f-8ae0-571bf4e1b97b" 00:19:44.717 } 00:19:44.717 ] 00:19:44.717 } 00:19:44.717 ] 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1564119 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1556504 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1556504 ']' 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1556504 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1556504 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1556504' 00:19:44.717 killing process with pid 1556504 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1556504 00:19:44.717 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1556504 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1564355 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1564355' 00:19:44.976 Process pid: 1564355 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1564355 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1564355 ']' 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.976 
14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.976 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:44.976 [2024-11-20 14:38:56.831612] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:44.976 [2024-11-20 14:38:56.832467] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:44.976 [2024-11-20 14:38:56.832507] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.976 [2024-11-20 14:38:56.904744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.236 [2024-11-20 14:38:56.943684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.236 [2024-11-20 14:38:56.943719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.236 [2024-11-20 14:38:56.943726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.236 [2024-11-20 14:38:56.943732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.236 [2024-11-20 14:38:56.943738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.236 [2024-11-20 14:38:56.945204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.236 [2024-11-20 14:38:56.945311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.236 [2024-11-20 14:38:56.945420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.236 [2024-11-20 14:38:56.945421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.236 [2024-11-20 14:38:57.014916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:45.236 [2024-11-20 14:38:57.015551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:45.236 [2024-11-20 14:38:57.015908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:45.236 [2024-11-20 14:38:57.016242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:45.236 [2024-11-20 14:38:57.016298] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:19:45.236 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.236 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:45.236 14:38:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:46.173 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:46.432 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:46.432 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:46.433 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:46.433 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:46.433 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:46.692 Malloc1 00:19:46.692 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:46.949 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:46.949 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:47.207 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:47.207 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:47.207 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:47.491 Malloc2 00:19:47.491 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:47.748 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1564355 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1564355 ']' 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1564355 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.005 14:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1564355 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1564355' 00:19:48.005 killing process with pid 1564355 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1564355 00:19:48.005 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1564355 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:48.264 00:19:48.264 real 0m50.878s 00:19:48.264 user 3m16.902s 00:19:48.264 sys 0m3.221s 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:48.264 ************************************ 00:19:48.264 END TEST nvmf_vfio_user 00:19:48.264 ************************************ 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.264 14:39:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.524 ************************************ 00:19:48.524 START TEST nvmf_vfio_user_nvme_compliance 00:19:48.524 ************************************ 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:48.524 * Looking for test storage... 00:19:48.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.524 14:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:48.524 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.525 14:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.525 --rc genhtml_branch_coverage=1 00:19:48.525 --rc genhtml_function_coverage=1 00:19:48.525 --rc genhtml_legend=1 00:19:48.525 --rc geninfo_all_blocks=1 00:19:48.525 --rc geninfo_unexecuted_blocks=1 00:19:48.525 00:19:48.525 ' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.525 --rc genhtml_branch_coverage=1 00:19:48.525 --rc genhtml_function_coverage=1 00:19:48.525 --rc genhtml_legend=1 00:19:48.525 --rc geninfo_all_blocks=1 00:19:48.525 --rc geninfo_unexecuted_blocks=1 00:19:48.525 00:19:48.525 ' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.525 --rc genhtml_branch_coverage=1 00:19:48.525 --rc genhtml_function_coverage=1 00:19:48.525 --rc 
genhtml_legend=1 00:19:48.525 --rc geninfo_all_blocks=1 00:19:48.525 --rc geninfo_unexecuted_blocks=1 00:19:48.525 00:19:48.525 ' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:48.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.525 --rc genhtml_branch_coverage=1 00:19:48.525 --rc genhtml_function_coverage=1 00:19:48.525 --rc genhtml_legend=1 00:19:48.525 --rc geninfo_all_blocks=1 00:19:48.525 --rc geninfo_unexecuted_blocks=1 00:19:48.525 00:19:48.525 ' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.525 14:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:48.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.525 14:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1565037 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1565037' 00:19:48.525 Process pid: 1565037 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1565037 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1565037 ']' 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.525 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.526 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.526 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.526 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:48.785 [2024-11-20 14:39:00.485702] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:19:48.785 [2024-11-20 14:39:00.485754] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.785 [2024-11-20 14:39:00.560615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:48.785 [2024-11-20 14:39:00.602875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.785 [2024-11-20 14:39:00.602914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.785 [2024-11-20 14:39:00.602922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.785 [2024-11-20 14:39:00.602929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.785 [2024-11-20 14:39:00.602934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.785 [2024-11-20 14:39:00.604382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.785 [2024-11-20 14:39:00.604490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.785 [2024-11-20 14:39:00.604490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.785 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.785 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:48.785 14:39:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.161 14:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:50.161 malloc0 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:50.161 14:39:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:50.161 00:19:50.161 00:19:50.161 CUnit - A unit testing framework for C - Version 2.1-3 00:19:50.162 http://cunit.sourceforge.net/ 00:19:50.162 00:19:50.162 00:19:50.162 Suite: nvme_compliance 00:19:50.162 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 14:39:01.948424] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.162 [2024-11-20 14:39:01.949776] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:50.162 [2024-11-20 14:39:01.949791] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:50.162 [2024-11-20 14:39:01.949798] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:50.162 [2024-11-20 14:39:01.951449] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.162 passed 00:19:50.162 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 14:39:02.032034] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.162 [2024-11-20 14:39:02.035059] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.162 passed 00:19:50.162 Test: admin_identify_ns ...[2024-11-20 14:39:02.115142] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.420 [2024-11-20 14:39:02.175960] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:50.420 [2024-11-20 14:39:02.183960] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:50.420 [2024-11-20 14:39:02.205051] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:50.420 passed 00:19:50.420 Test: admin_get_features_mandatory_features ...[2024-11-20 14:39:02.281294] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.420 [2024-11-20 14:39:02.284314] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.420 passed 00:19:50.420 Test: admin_get_features_optional_features ...[2024-11-20 14:39:02.363814] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.420 [2024-11-20 14:39:02.366832] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.678 passed 00:19:50.678 Test: admin_set_features_number_of_queues ...[2024-11-20 14:39:02.444826] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.678 [2024-11-20 14:39:02.550049] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.678 passed 00:19:50.678 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 14:39:02.626286] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.678 [2024-11-20 14:39:02.629309] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.937 passed 00:19:50.937 Test: admin_get_log_page_with_lpo ...[2024-11-20 14:39:02.707335] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.937 [2024-11-20 14:39:02.775963] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:50.937 [2024-11-20 14:39:02.789018] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:50.937 passed 00:19:50.937 Test: fabric_property_get ...[2024-11-20 14:39:02.866239] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:50.937 [2024-11-20 14:39:02.867476] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:50.937 [2024-11-20 14:39:02.869266] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.196 passed 00:19:51.196 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 14:39:02.949778] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.196 [2024-11-20 14:39:02.951010] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:51.196 [2024-11-20 14:39:02.952796] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.196 passed 00:19:51.196 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 14:39:03.030864] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.196 [2024-11-20 14:39:03.115961] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:51.196 [2024-11-20 14:39:03.131958] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:51.196 [2024-11-20 14:39:03.137040] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.455 passed 00:19:51.455 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 14:39:03.212321] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.455 [2024-11-20 14:39:03.213554] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:51.455 [2024-11-20 14:39:03.215337] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.455 passed 00:19:51.455 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 14:39:03.294519] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.455 [2024-11-20 14:39:03.370954] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:51.455 [2024-11-20 
14:39:03.394955] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:51.455 [2024-11-20 14:39:03.400034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.713 passed 00:19:51.713 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 14:39:03.475355] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.713 [2024-11-20 14:39:03.476594] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:51.713 [2024-11-20 14:39:03.476617] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:51.713 [2024-11-20 14:39:03.478376] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.713 passed 00:19:51.713 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 14:39:03.554485] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.713 [2024-11-20 14:39:03.649957] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:51.713 [2024-11-20 14:39:03.657956] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:51.713 [2024-11-20 14:39:03.665953] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:51.970 [2024-11-20 14:39:03.673958] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:51.970 [2024-11-20 14:39:03.703036] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.970 passed 00:19:51.970 Test: admin_create_io_sq_verify_pc ...[2024-11-20 14:39:03.780486] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:51.970 [2024-11-20 14:39:03.800961] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:51.970 [2024-11-20 14:39:03.821688] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:51.970 passed 00:19:51.970 Test: admin_create_io_qp_max_qps ...[2024-11-20 14:39:03.898238] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:53.443 [2024-11-20 14:39:05.001957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:53.443 [2024-11-20 14:39:05.384115] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:53.702 passed 00:19:53.702 Test: admin_create_io_sq_shared_cq ...[2024-11-20 14:39:05.462471] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:53.702 [2024-11-20 14:39:05.597957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:53.702 [2024-11-20 14:39:05.635017] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:53.960 passed 00:19:53.960 00:19:53.960 Run Summary: Type Total Ran Passed Failed Inactive 00:19:53.960 suites 1 1 n/a 0 0 00:19:53.960 tests 18 18 18 0 0 00:19:53.960 asserts 360 360 360 0 n/a 00:19:53.960 00:19:53.960 Elapsed time = 1.518 seconds 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1565037 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1565037 ']' 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1565037 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565037 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565037' 00:19:53.960 killing process with pid 1565037 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1565037 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1565037 00:19:53.960 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:54.219 00:19:54.219 real 0m5.692s 00:19:54.219 user 0m15.937s 00:19:54.219 sys 0m0.500s 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:54.219 ************************************ 00:19:54.219 END TEST nvmf_vfio_user_nvme_compliance 00:19:54.219 ************************************ 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:54.219 ************************************ 00:19:54.219 START TEST nvmf_vfio_user_fuzz 00:19:54.219 ************************************ 00:19:54.219 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:54.219 * Looking for test storage... 00:19:54.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.219 14:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.219 --rc genhtml_branch_coverage=1 00:19:54.219 --rc genhtml_function_coverage=1 00:19:54.219 --rc genhtml_legend=1 00:19:54.219 --rc geninfo_all_blocks=1 00:19:54.219 --rc geninfo_unexecuted_blocks=1 00:19:54.219 00:19:54.219 ' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.219 --rc genhtml_branch_coverage=1 00:19:54.219 --rc genhtml_function_coverage=1 00:19:54.219 --rc genhtml_legend=1 00:19:54.219 --rc geninfo_all_blocks=1 00:19:54.219 --rc geninfo_unexecuted_blocks=1 00:19:54.219 00:19:54.219 ' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.219 --rc genhtml_branch_coverage=1 00:19:54.219 --rc genhtml_function_coverage=1 00:19:54.219 --rc genhtml_legend=1 00:19:54.219 --rc geninfo_all_blocks=1 00:19:54.219 --rc geninfo_unexecuted_blocks=1 00:19:54.219 00:19:54.219 ' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.219 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:54.219 --rc genhtml_branch_coverage=1 00:19:54.219 --rc genhtml_function_coverage=1 00:19:54.219 --rc genhtml_legend=1 00:19:54.219 --rc geninfo_all_blocks=1 00:19:54.219 --rc geninfo_unexecuted_blocks=1 00:19:54.219 00:19:54.219 ' 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.219 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.479 14:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1566193 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1566193' 00:19:54.479 Process pid: 1566193 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1566193 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1566193 ']' 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.479 14:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.479 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:54.739 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.739 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:54.739 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:55.676 malloc0 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:55.676 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:27.742 Fuzzing completed. Shutting down the fuzz application 00:20:27.742 00:20:27.742 Dumping successful admin opcodes: 00:20:27.742 8, 9, 10, 24, 00:20:27.742 Dumping successful io opcodes: 00:20:27.742 0, 00:20:27.742 NS: 0x20000081ef00 I/O qp, Total commands completed: 1122454, total successful commands: 4419, random_seed: 2260650944 00:20:27.742 NS: 0x20000081ef00 admin qp, Total commands completed: 277104, total successful commands: 2236, random_seed: 2793825536 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1566193 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1566193 ']' 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1566193 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566193 00:20:27.742 14:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566193' 00:20:27.742 killing process with pid 1566193 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1566193 00:20:27.742 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1566193 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:27.742 00:20:27.742 real 0m32.253s 00:20:27.742 user 0m34.134s 00:20:27.742 sys 0m27.332s 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:27.742 ************************************ 00:20:27.742 END TEST nvmf_vfio_user_fuzz 00:20:27.742 ************************************ 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
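The vfio-user fuzz run recorded above follows a fixed setup sequence: start `nvmf_tgt`, create the VFIOUSER transport, back it with a 64 MiB malloc bdev, wire up a subsystem with a namespace and a vfio-user listener, then point `nvme_fuzz` at the resulting transport ID. The sketch below replays that sequence as plain commands, reconstructed from the xtrace lines; it is not runnable without an SPDK build, and the use of `scripts/rpc.py` (rather than the test harness's `rpc_cmd` wrapper) plus the `SPDK_DIR` location are assumptions.

```shell
#!/usr/bin/env bash
# Sketch of the vfio-user fuzz setup replayed from the log above.
# SPDK_DIR is an assumption; point it at your own SPDK checkout.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"

# Start the target on core 0 with all trace flags enabled (vfio_user_fuzz.sh@23)
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
trap 'kill "$nvmfpid"' EXIT

# Create the VFIOUSER transport and a 64 MiB / 512 B-block malloc namespace
$rpc nvmf_create_transport -t VFIOUSER
rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
$rpc bdev_malloc_create 64 512 -b malloc0
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# Fuzz for 30 s with a fixed seed so failures are reproducible (-S 123456);
# -N skips shutdown notification, -a fuzzes admin commands as well.
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a
```

Note the fixed `-S 123456` seed in the logged invocation: the "random_seed" values printed in the completion summary derive from it, which is what lets a failing fuzz iteration be replayed deterministically.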
00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.742 ************************************ 00:20:27.742 START TEST nvmf_auth_target 00:20:27.742 ************************************ 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:27.742 * Looking for test storage... 00:20:27.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.742 14:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.742 14:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.742 --rc genhtml_branch_coverage=1 00:20:27.742 --rc genhtml_function_coverage=1 00:20:27.742 --rc genhtml_legend=1 00:20:27.742 --rc geninfo_all_blocks=1 00:20:27.742 --rc geninfo_unexecuted_blocks=1 00:20:27.742 00:20:27.742 ' 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.742 --rc genhtml_branch_coverage=1 00:20:27.742 --rc genhtml_function_coverage=1 00:20:27.742 --rc genhtml_legend=1 00:20:27.742 --rc geninfo_all_blocks=1 00:20:27.742 --rc geninfo_unexecuted_blocks=1 00:20:27.742 00:20:27.742 ' 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.742 --rc genhtml_branch_coverage=1 00:20:27.742 --rc genhtml_function_coverage=1 00:20:27.742 --rc genhtml_legend=1 00:20:27.742 --rc geninfo_all_blocks=1 00:20:27.742 --rc geninfo_unexecuted_blocks=1 00:20:27.742 00:20:27.742 ' 00:20:27.742 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:27.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.742 --rc genhtml_branch_coverage=1 00:20:27.742 --rc genhtml_function_coverage=1 00:20:27.742 --rc genhtml_legend=1 00:20:27.742 
--rc geninfo_all_blocks=1 00:20:27.743 --rc geninfo_unexecuted_blocks=1 00:20:27.743 00:20:27.743 ' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.743 
14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:27.743 14:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.743 14:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable
00:20:27.743 14:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:20:33.019 Found 0000:86:00.0 (0x8086 - 0x159b)
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:20:33.019 Found 0000:86:00.1 (0x8086 - 0x159b)
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:20:33.019 Found net devices under 0000:86:00.0: cvl_0_0
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:20:33.019 Found net devices under 0000:86:00.1: cvl_0_1
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:33.019 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:33.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:33.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms
00:20:33.020
00:20:33.020 --- 10.0.0.2 ping statistics ---
00:20:33.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:33.020 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:33.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:33.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:20:33.020
00:20:33.020 --- 10.0.0.1 ping statistics ---
00:20:33.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:33.020 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1574920
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1574920
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1574920 ']'
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
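Condensed, the nvmf_tcp_init sequence traced above (nvmf/common.sh@265-291) amounts to the following; every command is taken from the log itself, and the interface names and addresses are the ones this particular run detected, not fixed values:

```shell
# Move the target-side port into its own network namespace so that
# initiator-to-target traffic must cross the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator side stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up

# Target side is configured from inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port (4420) and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Because the target runs under `ip netns exec cvl_0_0_ns_spdk`, the log's later `NVMF_APP` invocation launches `nvmf_tgt` inside that namespace.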
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1574939
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f6c366b3ce0234e31691b5f75064bfdc141b17bb43d0b512
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.F7e
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f6c366b3ce0234e31691b5f75064bfdc141b17bb43d0b512 0
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f6c366b3ce0234e31691b5f75064bfdc141b17bb43d0b512 0
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f6c366b3ce0234e31691b5f75064bfdc141b17bb43d0b512
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.F7e
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.F7e
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.F7e
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d53ddcedc2462dbb7a54a53fc90c8e574f7ddb0bb53603f1c91d47308398dcca
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mk6
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d53ddcedc2462dbb7a54a53fc90c8e574f7ddb0bb53603f1c91d47308398dcca 3
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d53ddcedc2462dbb7a54a53fc90c8e574f7ddb0bb53603f1c91d47308398dcca 3
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d53ddcedc2462dbb7a54a53fc90c8e574f7ddb0bb53603f1c91d47308398dcca
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mk6
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mk6
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.mk6
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=21923199e9a1cac17fe31a6ac549589b
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:20:33.020 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gey
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 21923199e9a1cac17fe31a6ac549589b 1
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 21923199e9a1cac17fe31a6ac549589b 1
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=21923199e9a1cac17fe31a6ac549589b
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gey
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gey
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.gey
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=313625b06f241666d5a7a875e82ba00ea4b4aceb9acaa8b8
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ADY
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 313625b06f241666d5a7a875e82ba00ea4b4aceb9acaa8b8 2
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 313625b06f241666d5a7a875e82ba00ea4b4aceb9acaa8b8 2
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=313625b06f241666d5a7a875e82ba00ea4b4aceb9acaa8b8
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:20:33.021 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.280 14:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ADY
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ADY
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ADY
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=78ad560129863c561c7d3a066919c1dc69b7db6ba365efdd
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RM4
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 78ad560129863c561c7d3a066919c1dc69b7db6ba365efdd 2
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 78ad560129863c561c7d3a066919c1dc69b7db6ba365efdd 2
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=78ad560129863c561c7d3a066919c1dc69b7db6ba365efdd
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RM4
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RM4
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.RM4
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e4baaaf4664a4a6234cb989b0e640370
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gV9
00:20:33.280 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e4baaaf4664a4a6234cb989b0e640370 1
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e4baaaf4664a4a6234cb989b0e640370 1
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e4baaaf4664a4a6234cb989b0e640370
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gV9
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gV9
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.gV9
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3076b0015a0e4ff2d6f2f34aae45e4832abe3c92468a01fbf21f77427f77945f
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bXe
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3076b0015a0e4ff2d6f2f34aae45e4832abe3c92468a01fbf21f77427f77945f 3
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3076b0015a0e4ff2d6f2f34aae45e4832abe3c92468a01fbf21f77427f77945f 3
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3076b0015a0e4ff2d6f2f34aae45e4832abe3c92468a01fbf21f77427f77945f
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bXe
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bXe
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bXe
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1574920
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1574920 ']'
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:33.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
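Each gen_dhchap_key call above reads N random bytes with `xxd -p /dev/urandom` and hands the hex string plus a digest id (null=0, sha256=1, sha384=2, sha512=3, per the `digests` map in the trace) to a small inline python in format_key. A sketch of what that formatting step produces, assuming the DH-HMAC-CHAP secret representation from the NVMe specification (base64 of the secret followed by its CRC-32, wrapped as `DHHC-1:<hash id>:...:`); the helper name below is ours, not SPDK's:

```python
import base64
import os
import zlib

def format_dhchap_key(key: bytes, digest_id: int) -> str:
    """Assumed DHHC-1 representation: base64(secret || CRC-32(secret)),
    with the CRC-32 appended little-endian and the digest id in hex."""
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"DHHC-1:{digest_id:02x}:{b64}:"

# 'gen_dhchap_key sha256 32' draws 16 random bytes (32 hex chars):
secret = os.urandom(16)
print(format_dhchap_key(secret, 1))  # e.g. DHHC-1:01:<base64>:
```

The resulting string is what gets written to the `/tmp/spdk.key-*` files, chmod'ed to 0600, and later registered with `keyring_file_add_key`.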
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:33.281 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1574939 /var/tmp/host.sock
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1574939 ']'
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:20:33.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:33.540 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.F7e
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.F7e
00:20:33.799 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.F7e
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.mk6 ]]
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mk6
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mk6
00:20:34.058 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mk6
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gey
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gey
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gey
00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- #
[[ -n /tmp/spdk.key-sha384.ADY ]] 00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ADY 00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ADY 00:20:34.316 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ADY 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RM4 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.RM4 00:20:34.575 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.RM4 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.gV9 ]] 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gV9 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gV9 00:20:34.833 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gV9 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bXe 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bXe 00:20:35.092 14:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bXe 00:20:35.351 14:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.351 14:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.351 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.608 00:20:35.608 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.608 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.608 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.866 { 00:20:35.866 "cntlid": 1, 00:20:35.866 "qid": 0, 00:20:35.866 "state": "enabled", 00:20:35.866 "thread": "nvmf_tgt_poll_group_000", 00:20:35.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:35.866 "listen_address": { 00:20:35.866 "trtype": "TCP", 00:20:35.866 "adrfam": "IPv4", 00:20:35.866 "traddr": "10.0.0.2", 00:20:35.866 "trsvcid": "4420" 00:20:35.866 }, 00:20:35.866 "peer_address": { 00:20:35.866 "trtype": "TCP", 00:20:35.866 "adrfam": "IPv4", 00:20:35.866 "traddr": "10.0.0.1", 00:20:35.866 "trsvcid": "37488" 00:20:35.866 }, 00:20:35.866 "auth": { 00:20:35.866 "state": "completed", 00:20:35.866 "digest": "sha256", 00:20:35.866 "dhgroup": "null" 00:20:35.866 } 00:20:35.866 } 00:20:35.866 ]' 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:35.866 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.124 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.124 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.124 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.124 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:36.124 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:36.841 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.100 14:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.359 00:20:37.359 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.359 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.359 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.619 { 00:20:37.619 "cntlid": 3, 00:20:37.619 "qid": 0, 00:20:37.619 "state": "enabled", 00:20:37.619 "thread": "nvmf_tgt_poll_group_000", 00:20:37.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:37.619 "listen_address": { 00:20:37.619 "trtype": "TCP", 00:20:37.619 "adrfam": "IPv4", 00:20:37.619 
"traddr": "10.0.0.2", 00:20:37.619 "trsvcid": "4420" 00:20:37.619 }, 00:20:37.619 "peer_address": { 00:20:37.619 "trtype": "TCP", 00:20:37.619 "adrfam": "IPv4", 00:20:37.619 "traddr": "10.0.0.1", 00:20:37.619 "trsvcid": "37522" 00:20:37.619 }, 00:20:37.619 "auth": { 00:20:37.619 "state": "completed", 00:20:37.619 "digest": "sha256", 00:20:37.619 "dhgroup": "null" 00:20:37.619 } 00:20:37.619 } 00:20:37.619 ]' 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.619 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.879 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:37.879 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:38.447 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.706 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.966 00:20:38.966 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.966 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.966 
14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.226 { 00:20:39.226 "cntlid": 5, 00:20:39.226 "qid": 0, 00:20:39.226 "state": "enabled", 00:20:39.226 "thread": "nvmf_tgt_poll_group_000", 00:20:39.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:39.226 "listen_address": { 00:20:39.226 "trtype": "TCP", 00:20:39.226 "adrfam": "IPv4", 00:20:39.226 "traddr": "10.0.0.2", 00:20:39.226 "trsvcid": "4420" 00:20:39.226 }, 00:20:39.226 "peer_address": { 00:20:39.226 "trtype": "TCP", 00:20:39.226 "adrfam": "IPv4", 00:20:39.226 "traddr": "10.0.0.1", 00:20:39.226 "trsvcid": "37546" 00:20:39.226 }, 00:20:39.226 "auth": { 00:20:39.226 "state": "completed", 00:20:39.226 "digest": "sha256", 00:20:39.226 "dhgroup": "null" 00:20:39.226 } 00:20:39.226 } 00:20:39.226 ]' 00:20:39.226 14:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.226 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.226 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:39.227 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:39.227 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.227 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.227 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.227 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.486 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:39.486 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:40.055 14:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.314 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.315 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.573 00:20:40.573 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.573 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.573 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.834 
14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.834 { 00:20:40.834 "cntlid": 7, 00:20:40.834 "qid": 0, 00:20:40.834 "state": "enabled", 00:20:40.834 "thread": "nvmf_tgt_poll_group_000", 00:20:40.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:40.834 "listen_address": { 00:20:40.834 "trtype": "TCP", 00:20:40.834 "adrfam": "IPv4", 00:20:40.834 "traddr": "10.0.0.2", 00:20:40.834 "trsvcid": "4420" 00:20:40.834 }, 00:20:40.834 "peer_address": { 00:20:40.834 "trtype": "TCP", 00:20:40.834 "adrfam": "IPv4", 00:20:40.834 "traddr": "10.0.0.1", 00:20:40.834 "trsvcid": "37580" 00:20:40.834 }, 00:20:40.834 "auth": { 00:20:40.834 "state": "completed", 00:20:40.834 "digest": "sha256", 00:20:40.834 "dhgroup": "null" 00:20:40.834 } 00:20:40.834 } 00:20:40.834 ]' 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.834 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.093 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:20:41.093 14:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.663 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.922 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.182 00:20:42.182 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.182 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.182 14:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.442 { 00:20:42.442 "cntlid": 9, 00:20:42.442 "qid": 0, 00:20:42.442 "state": "enabled", 00:20:42.442 "thread": "nvmf_tgt_poll_group_000", 00:20:42.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:42.442 "listen_address": { 00:20:42.442 "trtype": "TCP", 00:20:42.442 "adrfam": "IPv4", 00:20:42.442 "traddr": "10.0.0.2", 00:20:42.442 "trsvcid": "4420" 00:20:42.442 }, 00:20:42.442 "peer_address": { 00:20:42.442 "trtype": "TCP", 00:20:42.442 "adrfam": "IPv4", 00:20:42.442 "traddr": "10.0.0.1", 00:20:42.442 "trsvcid": "43332" 00:20:42.442 
}, 00:20:42.442 "auth": { 00:20:42.442 "state": "completed", 00:20:42.442 "digest": "sha256", 00:20:42.442 "dhgroup": "ffdhe2048" 00:20:42.442 } 00:20:42.442 } 00:20:42.442 ]' 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.442 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.443 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.443 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.443 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.443 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.702 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:42.702 14:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:43.271 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.530 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.790 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.790 { 00:20:43.790 "cntlid": 11, 00:20:43.790 "qid": 0, 00:20:43.790 "state": "enabled", 00:20:43.790 "thread": "nvmf_tgt_poll_group_000", 00:20:43.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:43.790 "listen_address": { 00:20:43.790 "trtype": "TCP", 00:20:43.790 "adrfam": "IPv4", 00:20:43.790 "traddr": "10.0.0.2", 00:20:43.790 "trsvcid": "4420" 00:20:43.790 }, 00:20:43.790 "peer_address": { 00:20:43.790 "trtype": "TCP", 00:20:43.790 "adrfam": "IPv4", 00:20:43.790 "traddr": "10.0.0.1", 00:20:43.790 "trsvcid": "43364" 00:20:43.790 }, 00:20:43.790 "auth": { 00:20:43.790 "state": "completed", 00:20:43.790 "digest": "sha256", 00:20:43.790 "dhgroup": "ffdhe2048" 00:20:43.790 } 00:20:43.790 } 00:20:43.790 ]' 00:20:43.790 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.049 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.049 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.049 14:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.049 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.049 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.049 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.049 14:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.309 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:44.309 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:44.879 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.880 14:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.139 00:20:45.139 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.139 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.139 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.398 14:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.398 { 00:20:45.398 "cntlid": 13, 00:20:45.398 "qid": 0, 00:20:45.398 "state": "enabled", 00:20:45.398 "thread": "nvmf_tgt_poll_group_000", 00:20:45.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:45.398 "listen_address": { 00:20:45.398 "trtype": "TCP", 00:20:45.398 "adrfam": "IPv4", 00:20:45.398 "traddr": "10.0.0.2", 00:20:45.398 "trsvcid": "4420" 00:20:45.398 }, 00:20:45.398 "peer_address": { 00:20:45.398 "trtype": "TCP", 00:20:45.398 "adrfam": "IPv4", 00:20:45.398 "traddr": "10.0.0.1", 00:20:45.398 "trsvcid": "43400" 00:20:45.398 }, 00:20:45.398 "auth": { 00:20:45.398 "state": "completed", 00:20:45.398 "digest": "sha256", 00:20:45.398 "dhgroup": "ffdhe2048" 00:20:45.398 } 00:20:45.398 } 00:20:45.398 ]' 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.398 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.657 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.657 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.657 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.657 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.657 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.916 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:45.916 14:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.484 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.743 00:20:46.743 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.743 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.743 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.003 { 00:20:47.003 "cntlid": 15, 00:20:47.003 "qid": 0, 00:20:47.003 "state": "enabled", 00:20:47.003 "thread": "nvmf_tgt_poll_group_000", 00:20:47.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:47.003 "listen_address": { 00:20:47.003 "trtype": "TCP", 00:20:47.003 "adrfam": "IPv4", 00:20:47.003 "traddr": "10.0.0.2", 00:20:47.003 "trsvcid": "4420" 00:20:47.003 }, 00:20:47.003 "peer_address": { 00:20:47.003 "trtype": "TCP", 00:20:47.003 "adrfam": "IPv4", 00:20:47.003 "traddr": "10.0.0.1", 
00:20:47.003 "trsvcid": "43428" 00:20:47.003 }, 00:20:47.003 "auth": { 00:20:47.003 "state": "completed", 00:20:47.003 "digest": "sha256", 00:20:47.003 "dhgroup": "ffdhe2048" 00:20:47.003 } 00:20:47.003 } 00:20:47.003 ]' 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.003 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.262 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.262 14:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.262 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.262 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.262 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.262 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:20:47.262 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:20:47.831 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.090 14:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.090 14:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.090 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.350 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.609 { 00:20:48.609 "cntlid": 17, 00:20:48.609 "qid": 0, 00:20:48.609 "state": "enabled", 00:20:48.609 "thread": "nvmf_tgt_poll_group_000", 00:20:48.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:48.609 "listen_address": { 00:20:48.609 "trtype": "TCP", 00:20:48.609 "adrfam": "IPv4", 00:20:48.609 "traddr": "10.0.0.2", 00:20:48.609 "trsvcid": "4420" 00:20:48.609 }, 00:20:48.609 "peer_address": { 00:20:48.609 "trtype": "TCP", 00:20:48.609 "adrfam": "IPv4", 00:20:48.609 "traddr": "10.0.0.1", 00:20:48.609 "trsvcid": "43462" 00:20:48.609 }, 00:20:48.609 "auth": { 00:20:48.609 "state": "completed", 00:20:48.609 "digest": "sha256", 00:20:48.609 "dhgroup": "ffdhe3072" 00:20:48.609 } 00:20:48.609 } 00:20:48.609 ]' 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.609 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.868 14:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.868 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.869 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.869 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.869 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.128 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:49.128 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:49.696 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:49.697 14:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.697 14:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.697 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.956 00:20:50.215 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.215 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.215 14:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.215 { 00:20:50.215 "cntlid": 19, 00:20:50.215 "qid": 0, 00:20:50.215 "state": "enabled", 00:20:50.215 "thread": "nvmf_tgt_poll_group_000", 00:20:50.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:50.215 "listen_address": { 00:20:50.215 "trtype": "TCP", 00:20:50.215 "adrfam": "IPv4", 00:20:50.215 "traddr": "10.0.0.2", 00:20:50.215 "trsvcid": "4420" 00:20:50.215 }, 00:20:50.215 "peer_address": { 00:20:50.215 "trtype": "TCP", 00:20:50.215 "adrfam": "IPv4", 00:20:50.215 "traddr": "10.0.0.1", 00:20:50.215 "trsvcid": "43492" 00:20:50.215 }, 00:20:50.215 "auth": { 00:20:50.215 "state": "completed", 00:20:50.215 "digest": "sha256", 00:20:50.215 "dhgroup": "ffdhe3072" 00:20:50.215 } 00:20:50.215 } 00:20:50.215 ]' 00:20:50.215 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.474 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.732 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:50.732 14:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.298 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:51.298 14:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.556 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.815 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.815 { 00:20:51.815 "cntlid": 21, 00:20:51.815 "qid": 0, 00:20:51.815 "state": "enabled", 00:20:51.815 "thread": "nvmf_tgt_poll_group_000", 00:20:51.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:51.815 "listen_address": { 00:20:51.815 "trtype": "TCP", 00:20:51.815 "adrfam": "IPv4", 00:20:51.815 "traddr": "10.0.0.2", 00:20:51.815 
"trsvcid": "4420" 00:20:51.815 }, 00:20:51.815 "peer_address": { 00:20:51.815 "trtype": "TCP", 00:20:51.815 "adrfam": "IPv4", 00:20:51.815 "traddr": "10.0.0.1", 00:20:51.815 "trsvcid": "43514" 00:20:51.815 }, 00:20:51.815 "auth": { 00:20:51.815 "state": "completed", 00:20:51.815 "digest": "sha256", 00:20:51.815 "dhgroup": "ffdhe3072" 00:20:51.815 } 00:20:51.815 } 00:20:51.815 ]' 00:20:51.815 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.074 14:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.333 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:52.333 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:52.901 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.160 14:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.160 00:20:53.419 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.419 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.419 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.419 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.419 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.419 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.420 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.420 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.420 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.420 { 00:20:53.420 "cntlid": 23, 00:20:53.420 "qid": 0, 00:20:53.420 "state": "enabled", 00:20:53.420 "thread": "nvmf_tgt_poll_group_000", 00:20:53.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:53.420 "listen_address": { 00:20:53.420 "trtype": "TCP", 00:20:53.420 "adrfam": "IPv4", 00:20:53.420 "traddr": "10.0.0.2", 00:20:53.420 "trsvcid": "4420" 00:20:53.420 }, 00:20:53.420 "peer_address": { 00:20:53.420 "trtype": "TCP", 00:20:53.420 "adrfam": "IPv4", 00:20:53.420 "traddr": "10.0.0.1", 00:20:53.420 "trsvcid": "47300" 00:20:53.420 }, 00:20:53.420 "auth": { 00:20:53.420 "state": "completed", 00:20:53.420 "digest": "sha256", 00:20:53.420 "dhgroup": "ffdhe3072" 00:20:53.420 } 00:20:53.420 } 00:20:53.420 ]' 00:20:53.420 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.420 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.679 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.679 14:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:53.679 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.679 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.679 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.679 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.938 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:20:53.938 14:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.506 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.765 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.765 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.024 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.024 14:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.024 { 00:20:55.024 "cntlid": 25, 00:20:55.024 "qid": 0, 00:20:55.024 "state": "enabled", 00:20:55.024 "thread": "nvmf_tgt_poll_group_000", 00:20:55.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:55.024 "listen_address": { 00:20:55.024 "trtype": "TCP", 00:20:55.024 "adrfam": "IPv4", 00:20:55.024 "traddr": "10.0.0.2", 00:20:55.024 "trsvcid": "4420" 00:20:55.024 }, 00:20:55.024 "peer_address": { 00:20:55.024 "trtype": "TCP", 00:20:55.024 "adrfam": "IPv4", 00:20:55.024 "traddr": "10.0.0.1", 00:20:55.024 "trsvcid": "47328" 00:20:55.024 }, 00:20:55.024 "auth": { 00:20:55.024 "state": "completed", 00:20:55.024 "digest": "sha256", 00:20:55.024 "dhgroup": "ffdhe4096" 00:20:55.024 } 00:20:55.024 } 00:20:55.024 ]' 00:20:55.024 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.283 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.283 14:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.283 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.283 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.283 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.283 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.283 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.542 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:55.542 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.110 14:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.111 14:40:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.111 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.370 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.370 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.370 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.370 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.629 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.629 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.629 { 00:20:56.629 "cntlid": 27, 00:20:56.629 "qid": 0, 00:20:56.629 "state": "enabled", 00:20:56.629 "thread": "nvmf_tgt_poll_group_000", 00:20:56.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:56.629 "listen_address": { 00:20:56.629 "trtype": "TCP", 00:20:56.629 "adrfam": "IPv4", 00:20:56.629 "traddr": "10.0.0.2", 00:20:56.629 
"trsvcid": "4420" 00:20:56.629 }, 00:20:56.629 "peer_address": { 00:20:56.629 "trtype": "TCP", 00:20:56.629 "adrfam": "IPv4", 00:20:56.629 "traddr": "10.0.0.1", 00:20:56.629 "trsvcid": "47366" 00:20:56.629 }, 00:20:56.629 "auth": { 00:20:56.629 "state": "completed", 00:20:56.629 "digest": "sha256", 00:20:56.629 "dhgroup": "ffdhe4096" 00:20:56.629 } 00:20:56.629 } 00:20:56.629 ]' 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.888 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.148 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:57.148 14:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.714 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.974 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.233 00:20:58.233 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.233 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:58.233 14:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.491 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.491 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.491 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.491 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.491 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.491 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.491 { 00:20:58.491 "cntlid": 29, 00:20:58.491 "qid": 0, 00:20:58.491 "state": "enabled", 00:20:58.491 "thread": "nvmf_tgt_poll_group_000", 00:20:58.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:58.491 "listen_address": { 00:20:58.491 "trtype": "TCP", 00:20:58.491 "adrfam": "IPv4", 00:20:58.491 "traddr": "10.0.0.2", 00:20:58.491 "trsvcid": "4420" 00:20:58.491 }, 00:20:58.491 "peer_address": { 00:20:58.491 "trtype": "TCP", 00:20:58.491 "adrfam": "IPv4", 00:20:58.491 "traddr": "10.0.0.1", 00:20:58.491 "trsvcid": "47382" 00:20:58.491 }, 00:20:58.491 "auth": { 00:20:58.491 "state": "completed", 00:20:58.491 "digest": "sha256", 00:20:58.491 "dhgroup": "ffdhe4096" 00:20:58.491 } 00:20:58.491 } 00:20:58.492 ]' 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.492 14:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.492 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.750 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:58.750 14:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:59.317 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.576 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.835 00:20:59.835 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.835 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.835 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.094 { 00:21:00.094 "cntlid": 31, 00:21:00.094 "qid": 0, 00:21:00.094 "state": "enabled", 00:21:00.094 "thread": "nvmf_tgt_poll_group_000", 00:21:00.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:00.094 "listen_address": { 00:21:00.094 "trtype": "TCP", 00:21:00.094 "adrfam": "IPv4", 00:21:00.094 "traddr": "10.0.0.2", 00:21:00.094 "trsvcid": "4420" 00:21:00.094 }, 00:21:00.094 "peer_address": { 00:21:00.094 "trtype": "TCP", 00:21:00.094 "adrfam": "IPv4", 00:21:00.094 "traddr": "10.0.0.1", 00:21:00.094 "trsvcid": "47400" 00:21:00.094 }, 00:21:00.094 "auth": { 00:21:00.094 "state": "completed", 00:21:00.094 "digest": "sha256", 00:21:00.094 "dhgroup": "ffdhe4096" 00:21:00.094 } 00:21:00.094 } 00:21:00.094 ]' 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.094 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.353 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:00.353 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.921 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:00.921 14:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:01.180 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:01.180 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.180 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.180 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.180 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.180 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.181 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.439 00:21:01.439 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.439 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.439 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.698 { 00:21:01.698 "cntlid": 33, 00:21:01.698 "qid": 0, 00:21:01.698 "state": "enabled", 00:21:01.698 "thread": "nvmf_tgt_poll_group_000", 00:21:01.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:01.698 "listen_address": { 00:21:01.698 "trtype": "TCP", 00:21:01.698 "adrfam": "IPv4", 00:21:01.698 "traddr": "10.0.0.2", 00:21:01.698 
"trsvcid": "4420" 00:21:01.698 }, 00:21:01.698 "peer_address": { 00:21:01.698 "trtype": "TCP", 00:21:01.698 "adrfam": "IPv4", 00:21:01.698 "traddr": "10.0.0.1", 00:21:01.698 "trsvcid": "47416" 00:21:01.698 }, 00:21:01.698 "auth": { 00:21:01.698 "state": "completed", 00:21:01.698 "digest": "sha256", 00:21:01.698 "dhgroup": "ffdhe6144" 00:21:01.698 } 00:21:01.698 } 00:21:01.698 ]' 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.698 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.957 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.957 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.957 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.957 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:01.957 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:02.527 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.787 14:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.046 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.305 { 00:21:03.305 "cntlid": 35, 00:21:03.305 "qid": 0, 00:21:03.305 "state": "enabled", 00:21:03.305 "thread": "nvmf_tgt_poll_group_000", 00:21:03.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:03.305 "listen_address": { 00:21:03.305 "trtype": "TCP", 00:21:03.305 "adrfam": "IPv4", 00:21:03.305 "traddr": "10.0.0.2", 00:21:03.305 "trsvcid": "4420" 00:21:03.305 }, 00:21:03.305 "peer_address": { 00:21:03.305 "trtype": "TCP", 00:21:03.305 "adrfam": "IPv4", 00:21:03.305 "traddr": "10.0.0.1", 00:21:03.305 "trsvcid": "54754" 00:21:03.305 }, 00:21:03.305 "auth": { 00:21:03.305 "state": "completed", 00:21:03.305 "digest": "sha256", 00:21:03.305 "dhgroup": "ffdhe6144" 00:21:03.305 } 00:21:03.305 } 00:21:03.305 ]' 00:21:03.305 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.564 14:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.564 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.564 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.564 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.564 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.564 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.564 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.823 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:03.823 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
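The cycle repeated throughout this log — configure DH-CHAP options, register the host NQN with a key pair, attach a controller, then inspect the qpair listing — ends with three `jq` checks (`target/auth.sh` lines `@75`–`@77`) against the JSON returned by `nvmf_subsystem_get_qpairs`. A minimal sketch of that verification step, using a qpair entry reduced to the fields the script checks (field names and values copied from the listings above; the helper name `check_auth` is illustrative, not part of auth.sh):

```python
import json

# Example qpair listing as returned by `rpc.py nvmf_subsystem_get_qpairs`,
# trimmed to the fields the jq checks actually read.
qpairs_json = '''
[
  {
    "cntlid": 37,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe6144"
    }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror of the three jq assertions in target/auth.sh:
    .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state
    must match the expected digest, DH group, and "completed"."""
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha256", "ffdhe6144"))  # True for this listing
```

In the shell script the same result is reached by piping the `qpairs` variable through `jq -r` and comparing each extracted field with a `[[ ... == ... ]]` test, one field per check.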
00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.391 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.961 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.961 14:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.961 { 00:21:04.961 "cntlid": 37, 00:21:04.961 "qid": 0, 00:21:04.961 "state": "enabled", 00:21:04.961 "thread": "nvmf_tgt_poll_group_000", 00:21:04.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:04.961 "listen_address": { 00:21:04.961 "trtype": "TCP", 00:21:04.961 "adrfam": "IPv4", 00:21:04.961 "traddr": "10.0.0.2", 00:21:04.961 "trsvcid": "4420" 00:21:04.961 }, 00:21:04.961 "peer_address": { 00:21:04.961 "trtype": "TCP", 00:21:04.961 "adrfam": "IPv4", 00:21:04.961 "traddr": "10.0.0.1", 00:21:04.961 "trsvcid": "54774" 00:21:04.961 }, 00:21:04.961 "auth": { 00:21:04.961 "state": "completed", 00:21:04.961 "digest": "sha256", 00:21:04.961 "dhgroup": "ffdhe6144" 00:21:04.961 } 00:21:04.961 } 00:21:04.961 ]' 00:21:04.961 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.220 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.220 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.220 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.220 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.220 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.220 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.220 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.479 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:05.480 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:06.047 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.048 14:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.307 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.307 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:06.307 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.307 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:06.566 00:21:06.566 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.566 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.566 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.825 { 00:21:06.825 "cntlid": 39, 00:21:06.825 "qid": 0, 00:21:06.825 "state": "enabled", 00:21:06.825 "thread": "nvmf_tgt_poll_group_000", 00:21:06.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:06.825 "listen_address": { 00:21:06.825 "trtype": "TCP", 00:21:06.825 "adrfam": 
"IPv4", 00:21:06.825 "traddr": "10.0.0.2", 00:21:06.825 "trsvcid": "4420" 00:21:06.825 }, 00:21:06.825 "peer_address": { 00:21:06.825 "trtype": "TCP", 00:21:06.825 "adrfam": "IPv4", 00:21:06.825 "traddr": "10.0.0.1", 00:21:06.825 "trsvcid": "54802" 00:21:06.825 }, 00:21:06.825 "auth": { 00:21:06.825 "state": "completed", 00:21:06.825 "digest": "sha256", 00:21:06.825 "dhgroup": "ffdhe6144" 00:21:06.825 } 00:21:06.825 } 00:21:06.825 ]' 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.825 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.084 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:07.084 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.667 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.668 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.926 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:07.926 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.926 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:07.926 
14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:07.926 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:07.926 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.927 14:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.495 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.495 14:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.495 { 00:21:08.495 "cntlid": 41, 00:21:08.495 "qid": 0, 00:21:08.495 "state": "enabled", 00:21:08.495 "thread": "nvmf_tgt_poll_group_000", 00:21:08.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:08.495 "listen_address": { 00:21:08.495 "trtype": "TCP", 00:21:08.495 "adrfam": "IPv4", 00:21:08.495 "traddr": "10.0.0.2", 00:21:08.495 "trsvcid": "4420" 00:21:08.495 }, 00:21:08.495 "peer_address": { 00:21:08.495 "trtype": "TCP", 00:21:08.495 "adrfam": "IPv4", 00:21:08.495 "traddr": "10.0.0.1", 00:21:08.495 "trsvcid": "54832" 00:21:08.495 }, 00:21:08.495 "auth": { 00:21:08.495 "state": "completed", 00:21:08.495 "digest": "sha256", 00:21:08.495 "dhgroup": "ffdhe8192" 00:21:08.495 } 00:21:08.495 } 00:21:08.495 ]' 00:21:08.495 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.754 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.013 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:09.013 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:09.582 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.583 14:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.150 00:21:10.150 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.150 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.150 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.409 14:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.409 { 00:21:10.409 "cntlid": 43, 00:21:10.409 "qid": 0, 00:21:10.409 "state": "enabled", 00:21:10.409 "thread": "nvmf_tgt_poll_group_000", 00:21:10.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:10.409 "listen_address": { 00:21:10.409 "trtype": "TCP", 00:21:10.409 "adrfam": "IPv4", 00:21:10.409 "traddr": "10.0.0.2", 00:21:10.409 "trsvcid": "4420" 00:21:10.409 }, 00:21:10.409 "peer_address": { 00:21:10.409 "trtype": "TCP", 00:21:10.409 "adrfam": "IPv4", 00:21:10.409 "traddr": "10.0.0.1", 00:21:10.409 "trsvcid": "54872" 00:21:10.409 }, 00:21:10.409 "auth": { 00:21:10.409 "state": "completed", 00:21:10.409 "digest": "sha256", 00:21:10.409 "dhgroup": "ffdhe8192" 00:21:10.409 } 00:21:10.409 } 00:21:10.409 ]' 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.409 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.667 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:10.667 14:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.234 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.493 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.060 00:21:12.060 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.060 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.060 14:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.318 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.318 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.319 { 00:21:12.319 "cntlid": 45, 00:21:12.319 "qid": 0, 00:21:12.319 "state": "enabled", 00:21:12.319 "thread": "nvmf_tgt_poll_group_000", 00:21:12.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:12.319 
"listen_address": { 00:21:12.319 "trtype": "TCP", 00:21:12.319 "adrfam": "IPv4", 00:21:12.319 "traddr": "10.0.0.2", 00:21:12.319 "trsvcid": "4420" 00:21:12.319 }, 00:21:12.319 "peer_address": { 00:21:12.319 "trtype": "TCP", 00:21:12.319 "adrfam": "IPv4", 00:21:12.319 "traddr": "10.0.0.1", 00:21:12.319 "trsvcid": "54904" 00:21:12.319 }, 00:21:12.319 "auth": { 00:21:12.319 "state": "completed", 00:21:12.319 "digest": "sha256", 00:21:12.319 "dhgroup": "ffdhe8192" 00:21:12.319 } 00:21:12.319 } 00:21:12.319 ]' 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.319 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.577 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:12.577 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.145 14:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.404 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.972 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.972 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.972 { 00:21:13.972 "cntlid": 47, 00:21:13.972 "qid": 0, 00:21:13.972 "state": "enabled", 00:21:13.973 "thread": "nvmf_tgt_poll_group_000", 00:21:13.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:13.973 "listen_address": { 00:21:13.973 "trtype": "TCP", 00:21:13.973 "adrfam": "IPv4", 00:21:13.973 "traddr": "10.0.0.2", 00:21:13.973 "trsvcid": "4420" 00:21:13.973 }, 00:21:13.973 "peer_address": { 00:21:13.973 "trtype": "TCP", 00:21:13.973 "adrfam": "IPv4", 00:21:13.973 "traddr": "10.0.0.1", 00:21:13.973 "trsvcid": "58738" 00:21:13.973 }, 00:21:13.973 "auth": { 00:21:13.973 "state": "completed", 00:21:13.973 "digest": "sha256", 00:21:13.973 "dhgroup": "ffdhe8192" 00:21:13.973 } 00:21:13.973 } 00:21:13.973 ]' 00:21:13.973 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.973 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.973 14:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.231 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:14.231 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.231 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.231 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.231 14:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.231 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:14.231 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:14.799 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.799 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.799 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:14.799 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.058 
14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.058 14:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.316 00:21:15.316 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.316 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.316 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.594 { 00:21:15.594 "cntlid": 49, 00:21:15.594 "qid": 0, 00:21:15.594 "state": "enabled", 00:21:15.594 "thread": "nvmf_tgt_poll_group_000", 00:21:15.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:15.594 "listen_address": { 00:21:15.594 "trtype": "TCP", 00:21:15.594 "adrfam": "IPv4", 00:21:15.594 "traddr": "10.0.0.2", 00:21:15.594 "trsvcid": "4420" 00:21:15.594 }, 00:21:15.594 "peer_address": { 00:21:15.594 "trtype": "TCP", 00:21:15.594 "adrfam": "IPv4", 00:21:15.594 "traddr": "10.0.0.1", 00:21:15.594 "trsvcid": "58766" 00:21:15.594 }, 00:21:15.594 "auth": { 00:21:15.594 "state": "completed", 00:21:15.594 "digest": "sha384", 00:21:15.594 "dhgroup": "null" 00:21:15.594 } 00:21:15.594 } 00:21:15.594 ]' 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:15.594 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.859 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.859 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
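The `nvme connect` invocations above pass DH-HMAC-CHAP secrets in the textual `DHHC-1:<hash-id>:<base64>:` form. A minimal sketch of generating and validating such a secret, assuming the base64 payload is the raw key bytes followed by a little-endian CRC-32 of the key (the layout SPDK's test key generator appears to produce); `gen_dhchap_secret` and `parse_dhchap_secret` are hypothetical helper names, not SPDK APIs:

```python
import base64
import os
import re
import zlib


def gen_dhchap_secret(key_len: int = 32, hash_id: str = "01") -> str:
    """Build a DHHC-1 secret string: base64(key || CRC-32(key)).

    Assumption: the trailing 4 bytes are the little-endian CRC-32 of the
    key, as in the secrets this log's test keys appear to use.
    """
    key = os.urandom(key_len)
    crc = zlib.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hash_id}:{base64.b64encode(key + crc).decode()}:"


def parse_dhchap_secret(secret: str) -> bytes:
    """Split a DHHC-1 secret, verify the trailing CRC-32, return key bytes."""
    m = re.fullmatch(r"DHHC-1:(00|01|02|03):([A-Za-z0-9+/=]+):", secret)
    if not m:
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(m.group(2))
    key, crc = blob[:-4], blob[-4:]
    if zlib.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("CRC mismatch")
    return key
```

The CRC gives a cheap integrity check when operators copy secrets between the target configuration and the host's `--dhchap-secret` flag.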
00:21:15.859 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.859 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:15.859 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.426 14:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:16.426 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.686 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.944 00:21:16.944 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.944 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.944 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.203 { 00:21:17.203 "cntlid": 51, 00:21:17.203 "qid": 0, 00:21:17.203 "state": "enabled", 00:21:17.203 "thread": "nvmf_tgt_poll_group_000", 00:21:17.203 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:17.203 "listen_address": { 00:21:17.203 "trtype": "TCP", 00:21:17.203 "adrfam": "IPv4", 00:21:17.203 "traddr": "10.0.0.2", 00:21:17.203 "trsvcid": "4420" 00:21:17.203 }, 00:21:17.203 "peer_address": { 00:21:17.203 "trtype": "TCP", 00:21:17.203 "adrfam": "IPv4", 00:21:17.203 "traddr": "10.0.0.1", 00:21:17.203 "trsvcid": "58796" 00:21:17.203 }, 00:21:17.203 "auth": { 00:21:17.203 "state": "completed", 00:21:17.203 "digest": "sha384", 00:21:17.203 "dhgroup": "null" 00:21:17.203 } 00:21:17.203 } 00:21:17.203 ]' 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:17.203 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.469 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.469 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.469 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.470 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:17.470 14:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:18.038 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.038 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.038 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.039 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.039 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.039 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.039 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:18.039 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.297 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.556 00:21:18.556 14:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.556 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.556 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.815 { 00:21:18.815 "cntlid": 53, 00:21:18.815 "qid": 0, 00:21:18.815 "state": "enabled", 00:21:18.815 "thread": "nvmf_tgt_poll_group_000", 00:21:18.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:18.815 "listen_address": { 00:21:18.815 "trtype": "TCP", 00:21:18.815 "adrfam": "IPv4", 00:21:18.815 "traddr": "10.0.0.2", 00:21:18.815 "trsvcid": "4420" 00:21:18.815 }, 00:21:18.815 "peer_address": { 00:21:18.815 "trtype": "TCP", 00:21:18.815 "adrfam": "IPv4", 00:21:18.815 "traddr": "10.0.0.1", 00:21:18.815 "trsvcid": "58826" 00:21:18.815 }, 00:21:18.815 "auth": { 00:21:18.815 "state": "completed", 00:21:18.815 "digest": "sha384", 00:21:18.815 "dhgroup": "null" 00:21:18.815 } 00:21:18.815 } 00:21:18.815 ]' 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:18.815 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.074 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.074 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.074 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.074 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:19.074 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.642 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:19.901 
14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.901 14:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.160 00:21:20.160 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.160 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.160 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.420 14:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.420 { 00:21:20.420 "cntlid": 55, 00:21:20.420 "qid": 0, 00:21:20.420 "state": "enabled", 00:21:20.420 "thread": "nvmf_tgt_poll_group_000", 00:21:20.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:20.420 "listen_address": { 00:21:20.420 "trtype": "TCP", 00:21:20.420 "adrfam": "IPv4", 00:21:20.420 "traddr": "10.0.0.2", 00:21:20.420 "trsvcid": "4420" 00:21:20.420 }, 00:21:20.420 "peer_address": { 00:21:20.420 "trtype": "TCP", 00:21:20.420 "adrfam": "IPv4", 00:21:20.420 "traddr": "10.0.0.1", 00:21:20.420 "trsvcid": "58852" 00:21:20.420 }, 00:21:20.420 "auth": { 00:21:20.420 "state": "completed", 00:21:20.420 "digest": "sha384", 00:21:20.420 "dhgroup": "null" 00:21:20.420 } 00:21:20.420 } 00:21:20.420 ]' 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:20.420 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.680 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.680 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.680 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.680 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:20.680 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:21.247 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.247 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.247 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.247 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.506 14:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.506 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.765 00:21:21.765 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.765 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.765 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.024 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.024 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.024 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.024 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.024 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.024 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.024 { 00:21:22.024 "cntlid": 57, 00:21:22.024 "qid": 0, 00:21:22.024 "state": "enabled", 00:21:22.024 "thread": "nvmf_tgt_poll_group_000", 00:21:22.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:22.024 "listen_address": { 00:21:22.024 "trtype": "TCP", 00:21:22.024 "adrfam": "IPv4", 00:21:22.024 "traddr": "10.0.0.2", 00:21:22.024 
"trsvcid": "4420" 00:21:22.024 }, 00:21:22.024 "peer_address": { 00:21:22.024 "trtype": "TCP", 00:21:22.024 "adrfam": "IPv4", 00:21:22.024 "traddr": "10.0.0.1", 00:21:22.024 "trsvcid": "58886" 00:21:22.024 }, 00:21:22.024 "auth": { 00:21:22.024 "state": "completed", 00:21:22.024 "digest": "sha384", 00:21:22.024 "dhgroup": "ffdhe2048" 00:21:22.024 } 00:21:22.024 } 00:21:22.024 ]' 00:21:22.025 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.025 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.025 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.025 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.025 14:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.283 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.283 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.283 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.283 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:22.283 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:22.850 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.850 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.850 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.850 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.850 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.109 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.109 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.109 14:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.109 14:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.109 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.368 00:21:23.368 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.368 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.368 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.627 { 00:21:23.627 "cntlid": 59, 00:21:23.627 "qid": 0, 00:21:23.627 "state": "enabled", 00:21:23.627 "thread": "nvmf_tgt_poll_group_000", 00:21:23.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:23.627 "listen_address": { 00:21:23.627 "trtype": "TCP", 00:21:23.627 "adrfam": "IPv4", 00:21:23.627 "traddr": "10.0.0.2", 00:21:23.627 "trsvcid": "4420" 00:21:23.627 }, 00:21:23.627 "peer_address": { 00:21:23.627 "trtype": "TCP", 00:21:23.627 "adrfam": "IPv4", 00:21:23.627 "traddr": "10.0.0.1", 00:21:23.627 "trsvcid": "48402" 00:21:23.627 }, 00:21:23.627 "auth": { 00:21:23.627 "state": "completed", 00:21:23.627 "digest": "sha384", 00:21:23.627 "dhgroup": "ffdhe2048" 00:21:23.627 } 00:21:23.627 } 00:21:23.627 ]' 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.627 14:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.627 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.887 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.887 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.887 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.887 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.887 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:23.887 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:24.454 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.713 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.972 00:21:24.972 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.972 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.972 14:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.231 14:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.231 { 00:21:25.231 "cntlid": 61, 00:21:25.231 "qid": 0, 00:21:25.231 "state": "enabled", 00:21:25.231 "thread": "nvmf_tgt_poll_group_000", 00:21:25.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:25.231 "listen_address": { 00:21:25.231 "trtype": "TCP", 00:21:25.231 "adrfam": "IPv4", 00:21:25.231 "traddr": "10.0.0.2", 00:21:25.231 "trsvcid": "4420" 00:21:25.231 }, 00:21:25.231 "peer_address": { 00:21:25.231 "trtype": "TCP", 00:21:25.231 "adrfam": "IPv4", 00:21:25.231 "traddr": "10.0.0.1", 00:21:25.231 "trsvcid": "48416" 00:21:25.231 }, 00:21:25.231 "auth": { 00:21:25.231 "state": "completed", 00:21:25.231 "digest": "sha384", 00:21:25.231 "dhgroup": "ffdhe2048" 00:21:25.231 } 00:21:25.231 } 00:21:25.231 ]' 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.231 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.490 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.491 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.491 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.491 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:25.491 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:26.059 14:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.059 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.059 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.059 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.318 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.577 00:21:26.577 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.577 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.577 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.836 { 00:21:26.836 "cntlid": 63, 00:21:26.836 "qid": 0, 00:21:26.836 "state": "enabled", 00:21:26.836 "thread": "nvmf_tgt_poll_group_000", 00:21:26.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:26.836 "listen_address": { 00:21:26.836 "trtype": "TCP", 00:21:26.836 "adrfam": 
"IPv4", 00:21:26.836 "traddr": "10.0.0.2", 00:21:26.836 "trsvcid": "4420" 00:21:26.836 }, 00:21:26.836 "peer_address": { 00:21:26.836 "trtype": "TCP", 00:21:26.836 "adrfam": "IPv4", 00:21:26.836 "traddr": "10.0.0.1", 00:21:26.836 "trsvcid": "48452" 00:21:26.836 }, 00:21:26.836 "auth": { 00:21:26.836 "state": "completed", 00:21:26.836 "digest": "sha384", 00:21:26.836 "dhgroup": "ffdhe2048" 00:21:26.836 } 00:21:26.836 } 00:21:26.836 ]' 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.836 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.837 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.837 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.096 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.096 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.096 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.096 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:27.096 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.664 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.924 
14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.924 14:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.183 00:21:28.183 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.183 14:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.183 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.442 { 00:21:28.442 "cntlid": 65, 00:21:28.442 "qid": 0, 00:21:28.442 "state": "enabled", 00:21:28.442 "thread": "nvmf_tgt_poll_group_000", 00:21:28.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:28.442 "listen_address": { 00:21:28.442 "trtype": "TCP", 00:21:28.442 "adrfam": "IPv4", 00:21:28.442 "traddr": "10.0.0.2", 00:21:28.442 "trsvcid": "4420" 00:21:28.442 }, 00:21:28.442 "peer_address": { 00:21:28.442 "trtype": "TCP", 00:21:28.442 "adrfam": "IPv4", 00:21:28.442 "traddr": "10.0.0.1", 00:21:28.442 "trsvcid": "48470" 00:21:28.442 }, 00:21:28.442 "auth": { 00:21:28.442 "state": "completed", 00:21:28.442 "digest": "sha384", 00:21:28.442 "dhgroup": "ffdhe3072" 00:21:28.442 } 00:21:28.442 } 00:21:28.442 ]' 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.442 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.701 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.701 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.701 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.701 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:28.701 14:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:29.268 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.268 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.268 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.268 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.527 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.785 00:21:29.786 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.786 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.786 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.044 
14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.044 { 00:21:30.044 "cntlid": 67, 00:21:30.044 "qid": 0, 00:21:30.044 "state": "enabled", 00:21:30.044 "thread": "nvmf_tgt_poll_group_000", 00:21:30.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:30.044 "listen_address": { 00:21:30.044 "trtype": "TCP", 00:21:30.044 "adrfam": "IPv4", 00:21:30.044 "traddr": "10.0.0.2", 00:21:30.044 "trsvcid": "4420" 00:21:30.044 }, 00:21:30.044 "peer_address": { 00:21:30.044 "trtype": "TCP", 00:21:30.044 "adrfam": "IPv4", 00:21:30.044 "traddr": "10.0.0.1", 00:21:30.044 "trsvcid": "48492" 00:21:30.044 }, 00:21:30.044 "auth": { 00:21:30.044 "state": "completed", 00:21:30.044 "digest": "sha384", 00:21:30.044 "dhgroup": "ffdhe3072" 00:21:30.044 } 00:21:30.044 } 00:21:30.044 ]' 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.044 14:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.303 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.303 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.303 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.303 14:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.303 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.303 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:30.303 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:30.870 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.870 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.870 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.870 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.129 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.129 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.129 14:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.129 14:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.129 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.388 00:21:31.388 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.388 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.388 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.647 { 00:21:31.647 "cntlid": 69, 00:21:31.647 "qid": 0, 00:21:31.647 "state": "enabled", 00:21:31.647 "thread": "nvmf_tgt_poll_group_000", 00:21:31.647 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:31.647 "listen_address": { 00:21:31.647 "trtype": "TCP", 00:21:31.647 "adrfam": "IPv4", 00:21:31.647 "traddr": "10.0.0.2", 00:21:31.647 "trsvcid": "4420" 00:21:31.647 }, 00:21:31.647 "peer_address": { 00:21:31.647 "trtype": "TCP", 00:21:31.647 "adrfam": "IPv4", 00:21:31.647 "traddr": "10.0.0.1", 00:21:31.647 "trsvcid": "48520" 00:21:31.647 }, 00:21:31.647 "auth": { 00:21:31.647 "state": "completed", 00:21:31.647 "digest": "sha384", 00:21:31.647 "dhgroup": "ffdhe3072" 00:21:31.647 } 00:21:31.647 } 00:21:31.647 ]' 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.647 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.906 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.906 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.906 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.906 14:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:31.906 14:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.843 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.844 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.103 00:21:33.103 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:33.103 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.103 14:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.361 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.361 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.361 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.361 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.361 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.361 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.361 { 00:21:33.361 "cntlid": 71, 00:21:33.361 "qid": 0, 00:21:33.361 "state": "enabled", 00:21:33.362 "thread": "nvmf_tgt_poll_group_000", 00:21:33.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:33.362 "listen_address": { 00:21:33.362 "trtype": "TCP", 00:21:33.362 "adrfam": "IPv4", 00:21:33.362 "traddr": "10.0.0.2", 00:21:33.362 "trsvcid": "4420" 00:21:33.362 }, 00:21:33.362 "peer_address": { 00:21:33.362 "trtype": "TCP", 00:21:33.362 "adrfam": "IPv4", 00:21:33.362 "traddr": "10.0.0.1", 00:21:33.362 "trsvcid": "59010" 00:21:33.362 }, 00:21:33.362 "auth": { 00:21:33.362 "state": "completed", 00:21:33.362 "digest": "sha384", 00:21:33.362 "dhgroup": "ffdhe3072" 00:21:33.362 } 00:21:33.362 } 00:21:33.362 ]' 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.362 14:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.362 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.623 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:33.623 14:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:34.190 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.449 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.707 00:21:34.707 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.707 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.707 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.966 { 00:21:34.966 "cntlid": 73, 00:21:34.966 "qid": 0, 00:21:34.966 "state": "enabled", 00:21:34.966 "thread": "nvmf_tgt_poll_group_000", 00:21:34.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:34.966 "listen_address": { 00:21:34.966 "trtype": "TCP", 00:21:34.966 "adrfam": "IPv4", 00:21:34.966 "traddr": "10.0.0.2", 00:21:34.966 "trsvcid": "4420" 00:21:34.966 }, 00:21:34.966 "peer_address": { 00:21:34.966 "trtype": "TCP", 00:21:34.966 "adrfam": "IPv4", 00:21:34.966 "traddr": "10.0.0.1", 00:21:34.966 "trsvcid": "59040" 00:21:34.966 }, 00:21:34.966 "auth": { 00:21:34.966 "state": "completed", 00:21:34.966 "digest": "sha384", 00:21:34.966 "dhgroup": "ffdhe4096" 00:21:34.966 } 00:21:34.966 } 00:21:34.966 ]' 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.966 14:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.225 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:35.225 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.793 14:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:35.793 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.052 14:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.311 00:21:36.311 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.311 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.311 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.570 { 00:21:36.570 "cntlid": 75, 00:21:36.570 "qid": 0, 00:21:36.570 "state": 
"enabled", 00:21:36.570 "thread": "nvmf_tgt_poll_group_000", 00:21:36.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:36.570 "listen_address": { 00:21:36.570 "trtype": "TCP", 00:21:36.570 "adrfam": "IPv4", 00:21:36.570 "traddr": "10.0.0.2", 00:21:36.570 "trsvcid": "4420" 00:21:36.570 }, 00:21:36.570 "peer_address": { 00:21:36.570 "trtype": "TCP", 00:21:36.570 "adrfam": "IPv4", 00:21:36.570 "traddr": "10.0.0.1", 00:21:36.570 "trsvcid": "59066" 00:21:36.570 }, 00:21:36.570 "auth": { 00:21:36.570 "state": "completed", 00:21:36.570 "digest": "sha384", 00:21:36.570 "dhgroup": "ffdhe4096" 00:21:36.570 } 00:21:36.570 } 00:21:36.570 ]' 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.570 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.829 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret 
DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:36.829 14:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:37.428 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe4096 2 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.739 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.013 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.013 { 00:21:38.013 "cntlid": 77, 00:21:38.013 "qid": 0, 00:21:38.013 "state": "enabled", 00:21:38.013 "thread": "nvmf_tgt_poll_group_000", 00:21:38.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:38.013 "listen_address": { 00:21:38.013 "trtype": "TCP", 00:21:38.013 "adrfam": "IPv4", 00:21:38.013 "traddr": "10.0.0.2", 00:21:38.013 "trsvcid": "4420" 00:21:38.013 }, 00:21:38.013 "peer_address": { 00:21:38.013 "trtype": "TCP", 00:21:38.013 "adrfam": "IPv4", 00:21:38.013 "traddr": "10.0.0.1", 00:21:38.013 "trsvcid": "59088" 00:21:38.013 }, 00:21:38.013 "auth": { 00:21:38.013 "state": "completed", 00:21:38.013 "digest": "sha384", 00:21:38.013 "dhgroup": "ffdhe4096" 00:21:38.013 } 
00:21:38.013 } 00:21:38.013 ]' 00:21:38.013 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.304 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.304 14:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.304 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.304 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.304 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.304 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.304 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.562 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:38.562 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:21:39.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.128 14:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.386 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.644 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.644 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.915 { 00:21:39.915 "cntlid": 79, 00:21:39.915 "qid": 0, 00:21:39.915 "state": "enabled", 00:21:39.915 "thread": "nvmf_tgt_poll_group_000", 00:21:39.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:39.915 "listen_address": { 00:21:39.915 "trtype": "TCP", 00:21:39.915 "adrfam": "IPv4", 00:21:39.915 "traddr": "10.0.0.2", 00:21:39.915 "trsvcid": "4420" 00:21:39.915 }, 00:21:39.915 "peer_address": { 00:21:39.915 "trtype": "TCP", 00:21:39.915 "adrfam": "IPv4", 00:21:39.915 "traddr": "10.0.0.1", 00:21:39.915 "trsvcid": "59126" 00:21:39.915 }, 00:21:39.915 "auth": { 00:21:39.915 "state": "completed", 00:21:39.915 "digest": "sha384", 00:21:39.915 "dhgroup": "ffdhe4096" 00:21:39.915 } 00:21:39.915 } 00:21:39.915 ]' 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.915 14:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.915 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.174 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:40.174 14:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.740 14:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.740 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.999 14:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.258 00:21:41.258 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.258 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.258 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.517 { 00:21:41.517 "cntlid": 81, 00:21:41.517 "qid": 0, 00:21:41.517 "state": "enabled", 00:21:41.517 "thread": "nvmf_tgt_poll_group_000", 00:21:41.517 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:41.517 "listen_address": { 00:21:41.517 "trtype": "TCP", 00:21:41.517 "adrfam": "IPv4", 00:21:41.517 "traddr": "10.0.0.2", 00:21:41.517 "trsvcid": "4420" 00:21:41.517 }, 00:21:41.517 "peer_address": { 00:21:41.517 "trtype": "TCP", 00:21:41.517 "adrfam": "IPv4", 00:21:41.517 "traddr": "10.0.0.1", 00:21:41.517 "trsvcid": "59152" 00:21:41.517 }, 00:21:41.517 "auth": { 00:21:41.517 "state": "completed", 00:21:41.517 "digest": "sha384", 00:21:41.517 "dhgroup": "ffdhe6144" 00:21:41.517 } 00:21:41.517 } 00:21:41.517 ]' 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.517 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.776 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:41.776 14:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:42.344 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.345 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:42.603 14:40:54 
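The `nvme connect` invocations above pass secrets in the `DHHC-1:<hash>:<base64>:` representation used for NVMe DH-HMAC-CHAP. A small sketch pulling one of the secrets from this log apart — the field layout is taken from the trace; the interpretation of the second field (00 = opaque/untransformed, 01/02/03 = SHA-256/384/512 transformed) and of the trailing 4 bytes as a CRC-32 are assumptions based on the common secret representation, not something this log states:

```shell
# Split a DH-HMAC-CHAP secret from the trace into its fields.
# Assumed layout: DHHC-1:<hash-id>:<base64(key material + 4-byte CRC-32)>:
secret='DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==:'

version=$(printf '%s' "$secret" | cut -d: -f1)   # "DHHC-1"
hash_id=$(printf '%s' "$secret" | cut -d: -f2)   # "00" -> no transformation (assumed)
payload=$(printf '%s' "$secret" | cut -d: -f3)   # base64 blob

total=$(printf '%s' "$payload" | base64 -d | wc -c)  # decoded size in bytes
key_len=$((total - 4))                               # minus assumed 4-byte CRC

echo "version=$version hash=$hash_id key_bytes=$key_len"
```

For this particular secret the base64 payload decodes to 52 bytes, i.e. 48 bytes of key material plus the 4 trailing bytes.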
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:42.603 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.604 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.862 00:21:42.862 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.862 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.862 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.121 { 00:21:43.121 "cntlid": 83, 00:21:43.121 "qid": 0, 00:21:43.121 "state": "enabled", 00:21:43.121 "thread": "nvmf_tgt_poll_group_000", 00:21:43.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:43.121 "listen_address": { 00:21:43.121 "trtype": "TCP", 00:21:43.121 "adrfam": "IPv4", 00:21:43.121 "traddr": "10.0.0.2", 00:21:43.121 "trsvcid": "4420" 00:21:43.121 }, 00:21:43.121 "peer_address": { 00:21:43.121 "trtype": "TCP", 00:21:43.121 "adrfam": "IPv4", 00:21:43.121 "traddr": "10.0.0.1", 00:21:43.121 "trsvcid": "36842" 00:21:43.121 }, 00:21:43.121 "auth": { 00:21:43.121 "state": 
"completed", 00:21:43.121 "digest": "sha384", 00:21:43.121 "dhgroup": "ffdhe6144" 00:21:43.121 } 00:21:43.121 } 00:21:43.121 ]' 00:21:43.121 14:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.121 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.121 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.121 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.121 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.381 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.381 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.381 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.381 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:43.381 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:43.949 14:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:43.949 14:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.208 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.467 00:21:44.467 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.467 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.467 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.726 
14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.726 { 00:21:44.726 "cntlid": 85, 00:21:44.726 "qid": 0, 00:21:44.726 "state": "enabled", 00:21:44.726 "thread": "nvmf_tgt_poll_group_000", 00:21:44.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:44.726 "listen_address": { 00:21:44.726 "trtype": "TCP", 00:21:44.726 "adrfam": "IPv4", 00:21:44.726 "traddr": "10.0.0.2", 00:21:44.726 "trsvcid": "4420" 00:21:44.726 }, 00:21:44.726 "peer_address": { 00:21:44.726 "trtype": "TCP", 00:21:44.726 "adrfam": "IPv4", 00:21:44.726 "traddr": "10.0.0.1", 00:21:44.726 "trsvcid": "36862" 00:21:44.726 }, 00:21:44.726 "auth": { 00:21:44.726 "state": "completed", 00:21:44.726 "digest": "sha384", 00:21:44.726 "dhgroup": "ffdhe6144" 00:21:44.726 } 00:21:44.726 } 00:21:44.726 ]' 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.726 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.986 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.986 14:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.986 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.986 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.986 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.246 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:45.246 14:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.814 
14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.814 14:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.814 14:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.384 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.384 { 00:21:46.384 "cntlid": 87, 00:21:46.384 
"qid": 0, 00:21:46.384 "state": "enabled", 00:21:46.384 "thread": "nvmf_tgt_poll_group_000", 00:21:46.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:46.384 "listen_address": { 00:21:46.384 "trtype": "TCP", 00:21:46.384 "adrfam": "IPv4", 00:21:46.384 "traddr": "10.0.0.2", 00:21:46.384 "trsvcid": "4420" 00:21:46.384 }, 00:21:46.384 "peer_address": { 00:21:46.384 "trtype": "TCP", 00:21:46.384 "adrfam": "IPv4", 00:21:46.384 "traddr": "10.0.0.1", 00:21:46.384 "trsvcid": "36892" 00:21:46.384 }, 00:21:46.384 "auth": { 00:21:46.384 "state": "completed", 00:21:46.384 "digest": "sha384", 00:21:46.384 "dhgroup": "ffdhe6144" 00:21:46.384 } 00:21:46.384 } 00:21:46.384 ]' 00:21:46.384 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.644 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.903 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:46.903 14:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.471 14:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.471 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.040 00:21:48.040 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.040 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.040 14:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.299 { 00:21:48.299 "cntlid": 89, 00:21:48.299 "qid": 0, 00:21:48.299 "state": "enabled", 00:21:48.299 "thread": "nvmf_tgt_poll_group_000", 00:21:48.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:48.299 "listen_address": { 00:21:48.299 "trtype": "TCP", 00:21:48.299 "adrfam": "IPv4", 00:21:48.299 "traddr": "10.0.0.2", 00:21:48.299 "trsvcid": "4420" 00:21:48.299 }, 00:21:48.299 "peer_address": { 00:21:48.299 "trtype": "TCP", 00:21:48.299 "adrfam": "IPv4", 00:21:48.299 "traddr": "10.0.0.1", 00:21:48.299 "trsvcid": "36926" 00:21:48.299 }, 00:21:48.299 "auth": { 00:21:48.299 "state": 
"completed", 00:21:48.299 "digest": "sha384", 00:21:48.299 "dhgroup": "ffdhe8192" 00:21:48.299 } 00:21:48.299 } 00:21:48.299 ]' 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.299 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.558 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:48.558 14:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.126 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.384 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.385 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.385 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.953 00:21:49.953 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.953 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.953 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.213 { 00:21:50.213 "cntlid": 91, 00:21:50.213 "qid": 0, 00:21:50.213 "state": "enabled", 00:21:50.213 "thread": "nvmf_tgt_poll_group_000", 00:21:50.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:50.213 "listen_address": { 00:21:50.213 "trtype": "TCP", 00:21:50.213 "adrfam": "IPv4", 00:21:50.213 "traddr": "10.0.0.2", 00:21:50.213 "trsvcid": "4420" 00:21:50.213 }, 00:21:50.213 "peer_address": { 00:21:50.213 "trtype": "TCP", 00:21:50.213 "adrfam": "IPv4", 00:21:50.213 "traddr": "10.0.0.1", 00:21:50.213 "trsvcid": "36944" 00:21:50.213 }, 00:21:50.213 "auth": { 00:21:50.213 "state": "completed", 00:21:50.213 "digest": "sha384", 00:21:50.213 "dhgroup": "ffdhe8192" 00:21:50.213 } 00:21:50.213 } 00:21:50.213 ]' 00:21:50.213 14:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.213 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.213 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.213 14:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.213 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.213 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.213 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.213 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.472 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:50.472 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:51.040 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.041 14:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.300 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.867 00:21:51.867 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.867 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.867 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.126 14:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.126 { 00:21:52.126 "cntlid": 93, 00:21:52.126 "qid": 0, 00:21:52.126 "state": "enabled", 00:21:52.126 "thread": "nvmf_tgt_poll_group_000", 00:21:52.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:52.126 "listen_address": { 00:21:52.126 "trtype": "TCP", 00:21:52.126 "adrfam": "IPv4", 00:21:52.126 "traddr": "10.0.0.2", 00:21:52.126 "trsvcid": "4420" 00:21:52.126 }, 00:21:52.126 "peer_address": { 00:21:52.126 "trtype": "TCP", 00:21:52.126 "adrfam": "IPv4", 00:21:52.126 "traddr": "10.0.0.1", 00:21:52.126 "trsvcid": "36982" 00:21:52.126 }, 00:21:52.126 "auth": { 00:21:52.126 "state": "completed", 00:21:52.126 "digest": "sha384", 00:21:52.126 "dhgroup": "ffdhe8192" 00:21:52.126 } 00:21:52.126 } 00:21:52.126 ]' 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.126 14:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.386 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:52.386 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:52.954 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.213 14:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.781 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.781 { 00:21:53.781 "cntlid": 95, 00:21:53.781 "qid": 0, 00:21:53.781 "state": "enabled", 00:21:53.781 "thread": "nvmf_tgt_poll_group_000", 00:21:53.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:53.781 "listen_address": { 00:21:53.781 "trtype": "TCP", 00:21:53.781 "adrfam": "IPv4", 00:21:53.781 "traddr": "10.0.0.2", 00:21:53.781 "trsvcid": "4420" 00:21:53.781 }, 00:21:53.781 "peer_address": { 00:21:53.781 "trtype": "TCP", 00:21:53.781 "adrfam": "IPv4", 00:21:53.781 "traddr": "10.0.0.1", 
00:21:53.781 "trsvcid": "51722" 00:21:53.781 }, 00:21:53.781 "auth": { 00:21:53.781 "state": "completed", 00:21:53.781 "digest": "sha384", 00:21:53.781 "dhgroup": "ffdhe8192" 00:21:53.781 } 00:21:53.781 } 00:21:53.781 ]' 00:21:53.781 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.040 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.299 14:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:54.300 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:54.868 14:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.868 14:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.127 00:21:55.127 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.127 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.127 14:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.386 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.386 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.386 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.386 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.386 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.386 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.386 { 00:21:55.386 "cntlid": 97, 00:21:55.386 "qid": 0, 00:21:55.386 "state": "enabled", 00:21:55.386 "thread": "nvmf_tgt_poll_group_000", 00:21:55.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:55.386 "listen_address": { 00:21:55.386 "trtype": "TCP", 00:21:55.386 "adrfam": "IPv4", 00:21:55.386 "traddr": "10.0.0.2", 00:21:55.386 "trsvcid": "4420" 00:21:55.386 }, 00:21:55.386 "peer_address": { 00:21:55.386 "trtype": "TCP", 00:21:55.386 "adrfam": "IPv4", 00:21:55.386 "traddr": "10.0.0.1", 00:21:55.386 "trsvcid": "51754" 00:21:55.386 }, 00:21:55.387 "auth": { 00:21:55.387 "state": "completed", 00:21:55.387 "digest": "sha512", 00:21:55.387 "dhgroup": "null" 00:21:55.387 } 00:21:55.387 } 00:21:55.387 ]' 00:21:55.387 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.387 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.645 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:21:55.645 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:55.645 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.645 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.645 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.645 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.904 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:55.904 14:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.472 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.473 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.786 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.786 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.045 { 00:21:57.045 "cntlid": 99, 00:21:57.045 "qid": 0, 00:21:57.045 "state": "enabled", 00:21:57.045 "thread": "nvmf_tgt_poll_group_000", 00:21:57.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:57.045 "listen_address": { 00:21:57.045 "trtype": "TCP", 00:21:57.045 "adrfam": "IPv4", 00:21:57.045 "traddr": "10.0.0.2", 00:21:57.045 "trsvcid": "4420" 00:21:57.045 }, 00:21:57.045 "peer_address": { 00:21:57.045 "trtype": "TCP", 00:21:57.045 "adrfam": "IPv4", 00:21:57.045 "traddr": "10.0.0.1", 00:21:57.045 "trsvcid": "51774" 00:21:57.045 }, 00:21:57.045 "auth": { 00:21:57.045 "state": "completed", 00:21:57.045 "digest": "sha512", 00:21:57.045 "dhgroup": "null" 00:21:57.045 } 00:21:57.045 } 00:21:57.045 ]' 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.045 14:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:57.304 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.304 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.304 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.304 14:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.304 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:57.304 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:21:57.872 14:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.131 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.389 00:21:58.389 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.389 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.389 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.648 { 00:21:58.648 "cntlid": 101, 00:21:58.648 "qid": 0, 00:21:58.648 "state": "enabled", 00:21:58.648 "thread": "nvmf_tgt_poll_group_000", 00:21:58.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:58.648 "listen_address": { 00:21:58.648 "trtype": "TCP", 00:21:58.648 "adrfam": "IPv4", 00:21:58.648 
"traddr": "10.0.0.2", 00:21:58.648 "trsvcid": "4420" 00:21:58.648 }, 00:21:58.648 "peer_address": { 00:21:58.648 "trtype": "TCP", 00:21:58.648 "adrfam": "IPv4", 00:21:58.648 "traddr": "10.0.0.1", 00:21:58.648 "trsvcid": "51796" 00:21:58.648 }, 00:21:58.648 "auth": { 00:21:58.648 "state": "completed", 00:21:58.648 "digest": "sha512", 00:21:58.648 "dhgroup": "null" 00:21:58.648 } 00:21:58.648 } 00:21:58.648 ]' 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:58.648 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.907 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.907 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.907 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.907 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:58.907 14:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.475 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.734 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.993 00:21:59.993 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.993 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.993 14:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.252 { 00:22:00.252 "cntlid": 103, 00:22:00.252 "qid": 0, 00:22:00.252 "state": "enabled", 00:22:00.252 "thread": "nvmf_tgt_poll_group_000", 00:22:00.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:00.252 "listen_address": { 00:22:00.252 "trtype": "TCP", 00:22:00.252 "adrfam": "IPv4", 00:22:00.252 "traddr": "10.0.0.2", 00:22:00.252 "trsvcid": "4420" 00:22:00.252 }, 00:22:00.252 "peer_address": { 00:22:00.252 "trtype": "TCP", 00:22:00.252 "adrfam": "IPv4", 00:22:00.252 "traddr": "10.0.0.1", 00:22:00.252 "trsvcid": "51820" 00:22:00.252 }, 00:22:00.252 "auth": { 00:22:00.252 "state": "completed", 00:22:00.252 "digest": "sha512", 00:22:00.252 "dhgroup": "null" 00:22:00.252 } 00:22:00.252 } 00:22:00.252 ]' 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.252 14:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:00.252 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.511 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.511 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.511 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.511 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:00.511 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:01.078 14:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.078 14:41:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:01.078 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.338 
14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.338 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.596 00:22:01.596 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.596 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.596 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.854 { 00:22:01.854 "cntlid": 105, 00:22:01.854 "qid": 0, 00:22:01.854 "state": "enabled", 00:22:01.854 "thread": "nvmf_tgt_poll_group_000", 00:22:01.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:01.854 "listen_address": { 00:22:01.854 "trtype": "TCP", 00:22:01.854 "adrfam": "IPv4", 00:22:01.854 "traddr": "10.0.0.2", 00:22:01.854 "trsvcid": "4420" 00:22:01.854 }, 00:22:01.854 "peer_address": { 00:22:01.854 "trtype": "TCP", 00:22:01.854 "adrfam": "IPv4", 00:22:01.854 "traddr": "10.0.0.1", 00:22:01.854 "trsvcid": "51850" 00:22:01.854 }, 00:22:01.854 "auth": { 00:22:01.854 "state": "completed", 00:22:01.854 "digest": "sha512", 00:22:01.854 "dhgroup": "ffdhe2048" 00:22:01.854 } 00:22:01.854 } 00:22:01.854 ]' 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:01.854 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.112 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.112 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.112 14:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:02.112 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:02.112 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.680 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.939 14:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.940 14:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.198 00:22:03.199 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.199 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.199 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.458 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.458 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.458 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.458 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.458 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.458 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.458 { 00:22:03.458 "cntlid": 107, 00:22:03.458 "qid": 0, 00:22:03.458 "state": "enabled", 00:22:03.458 "thread": "nvmf_tgt_poll_group_000", 00:22:03.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:03.459 "listen_address": { 00:22:03.459 "trtype": "TCP", 00:22:03.459 "adrfam": "IPv4", 00:22:03.459 "traddr": "10.0.0.2", 00:22:03.459 "trsvcid": "4420" 00:22:03.459 }, 00:22:03.459 "peer_address": { 
00:22:03.459 "trtype": "TCP", 00:22:03.459 "adrfam": "IPv4", 00:22:03.459 "traddr": "10.0.0.1", 00:22:03.459 "trsvcid": "59270" 00:22:03.459 }, 00:22:03.459 "auth": { 00:22:03.459 "state": "completed", 00:22:03.459 "digest": "sha512", 00:22:03.459 "dhgroup": "ffdhe2048" 00:22:03.459 } 00:22:03.459 } 00:22:03.459 ]' 00:22:03.459 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.459 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.459 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.459 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:03.718 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.718 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.718 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.718 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.718 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:03.718 14:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:04.286 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:04.545 14:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.545 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.804 00:22:04.804 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.804 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.804 14:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.063 { 00:22:05.063 "cntlid": 109, 00:22:05.063 "qid": 0, 00:22:05.063 "state": "enabled", 00:22:05.063 "thread": "nvmf_tgt_poll_group_000", 00:22:05.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:05.063 "listen_address": { 00:22:05.063 "trtype": "TCP", 00:22:05.063 "adrfam": "IPv4", 00:22:05.063 "traddr": "10.0.0.2", 00:22:05.063 "trsvcid": "4420" 00:22:05.063 }, 00:22:05.063 "peer_address": { 00:22:05.063 "trtype": "TCP", 00:22:05.063 "adrfam": "IPv4", 00:22:05.063 "traddr": "10.0.0.1", 00:22:05.063 "trsvcid": "59294" 00:22:05.063 }, 00:22:05.063 "auth": { 00:22:05.063 "state": "completed", 00:22:05.063 "digest": "sha512", 00:22:05.063 "dhgroup": "ffdhe2048" 00:22:05.063 } 00:22:05.063 } 00:22:05.063 ]' 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.063 14:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:05.322 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.906 14:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.302 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.303 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.561 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.561 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.561 { 00:22:06.561 "cntlid": 111, 00:22:06.561 "qid": 0, 00:22:06.561 "state": "enabled", 00:22:06.561 "thread": "nvmf_tgt_poll_group_000", 00:22:06.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:06.561 "listen_address": { 00:22:06.561 "trtype": "TCP", 00:22:06.561 "adrfam": "IPv4", 00:22:06.561 "traddr": "10.0.0.2", 00:22:06.561 "trsvcid": "4420" 00:22:06.561 }, 00:22:06.561 "peer_address": { 00:22:06.561 "trtype": "TCP", 00:22:06.561 "adrfam": "IPv4", 00:22:06.561 "traddr": "10.0.0.1", 00:22:06.561 "trsvcid": "59322" 00:22:06.561 }, 00:22:06.561 "auth": { 00:22:06.561 "state": "completed", 00:22:06.561 "digest": "sha512", 00:22:06.561 "dhgroup": "ffdhe2048" 00:22:06.561 } 00:22:06.561 } 00:22:06.561 ]' 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.820 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:07.080 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:07.080 14:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:07.648 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:07.649 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.907 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.167 00:22:08.167 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.167 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.167 14:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.424 { 00:22:08.424 "cntlid": 113, 00:22:08.424 "qid": 0, 00:22:08.424 "state": "enabled", 00:22:08.424 "thread": "nvmf_tgt_poll_group_000", 00:22:08.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:08.424 "listen_address": { 00:22:08.424 "trtype": "TCP", 00:22:08.424 "adrfam": "IPv4", 00:22:08.424 "traddr": "10.0.0.2", 00:22:08.424 "trsvcid": "4420" 00:22:08.424 }, 00:22:08.424 "peer_address": { 00:22:08.424 "trtype": "TCP", 00:22:08.424 "adrfam": "IPv4", 
00:22:08.424 "traddr": "10.0.0.1", 00:22:08.424 "trsvcid": "59350" 00:22:08.424 }, 00:22:08.424 "auth": { 00:22:08.424 "state": "completed", 00:22:08.424 "digest": "sha512", 00:22:08.424 "dhgroup": "ffdhe3072" 00:22:08.424 } 00:22:08.424 } 00:22:08.424 ]' 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.424 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.683 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:08.683 14:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:09.250 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.250 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.250 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.250 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.250 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.251 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.251 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.251 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe3072 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.510 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.769 00:22:09.769 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.769 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.769 
14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.027 { 00:22:10.027 "cntlid": 115, 00:22:10.027 "qid": 0, 00:22:10.027 "state": "enabled", 00:22:10.027 "thread": "nvmf_tgt_poll_group_000", 00:22:10.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:10.027 "listen_address": { 00:22:10.027 "trtype": "TCP", 00:22:10.027 "adrfam": "IPv4", 00:22:10.027 "traddr": "10.0.0.2", 00:22:10.027 "trsvcid": "4420" 00:22:10.027 }, 00:22:10.027 "peer_address": { 00:22:10.027 "trtype": "TCP", 00:22:10.027 "adrfam": "IPv4", 00:22:10.027 "traddr": "10.0.0.1", 00:22:10.027 "trsvcid": "59372" 00:22:10.027 }, 00:22:10.027 "auth": { 00:22:10.027 "state": "completed", 00:22:10.027 "digest": "sha512", 00:22:10.027 "dhgroup": "ffdhe3072" 00:22:10.027 } 00:22:10.027 } 00:22:10.027 ]' 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.027 14:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.286 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:10.286 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.853 14:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.853 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.113 14:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.113 14:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.372 00:22:11.372 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.372 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.372 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.630 { 00:22:11.630 "cntlid": 117, 00:22:11.630 "qid": 0, 00:22:11.630 "state": "enabled", 00:22:11.630 "thread": "nvmf_tgt_poll_group_000", 00:22:11.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:11.630 "listen_address": { 00:22:11.630 "trtype": "TCP", 00:22:11.630 "adrfam": "IPv4", 00:22:11.630 "traddr": "10.0.0.2", 00:22:11.630 "trsvcid": "4420" 00:22:11.630 }, 00:22:11.630 "peer_address": { 00:22:11.630 "trtype": "TCP", 00:22:11.630 "adrfam": "IPv4", 00:22:11.630 "traddr": "10.0.0.1", 00:22:11.630 "trsvcid": "59398" 00:22:11.630 }, 00:22:11.630 "auth": { 00:22:11.630 "state": "completed", 00:22:11.630 "digest": "sha512", 00:22:11.630 "dhgroup": "ffdhe3072" 00:22:11.630 } 00:22:11.630 } 00:22:11.630 ]' 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.630 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.631 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.631 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:11.631 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.631 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.631 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.631 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.889 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:11.889 14:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.457 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.457 14:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.716 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.716 14:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.976 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.976 { 00:22:12.976 "cntlid": 119, 00:22:12.976 "qid": 0, 00:22:12.976 "state": "enabled", 00:22:12.976 "thread": "nvmf_tgt_poll_group_000", 00:22:12.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:12.976 "listen_address": { 00:22:12.976 "trtype": "TCP", 00:22:12.976 "adrfam": "IPv4", 00:22:12.976 "traddr": "10.0.0.2", 00:22:12.976 "trsvcid": "4420" 00:22:12.976 }, 00:22:12.976 "peer_address": { 00:22:12.976 "trtype": 
"TCP", 00:22:12.976 "adrfam": "IPv4", 00:22:12.976 "traddr": "10.0.0.1", 00:22:12.976 "trsvcid": "33984" 00:22:12.976 }, 00:22:12.976 "auth": { 00:22:12.976 "state": "completed", 00:22:12.976 "digest": "sha512", 00:22:12.976 "dhgroup": "ffdhe3072" 00:22:12.976 } 00:22:12.976 } 00:22:12.976 ]' 00:22:12.976 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.235 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.235 14:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.235 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.235 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.235 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.235 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.235 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.494 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:13.494 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 
00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.061 14:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:14.320 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:14.320 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.320 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.320 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:14.320 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.321 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.580 00:22:14.580 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.580 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.580 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.580 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.839 { 00:22:14.839 "cntlid": 121, 00:22:14.839 "qid": 0, 00:22:14.839 "state": "enabled", 00:22:14.839 "thread": "nvmf_tgt_poll_group_000", 00:22:14.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:14.839 "listen_address": { 00:22:14.839 "trtype": "TCP", 00:22:14.839 "adrfam": "IPv4", 00:22:14.839 "traddr": "10.0.0.2", 00:22:14.839 "trsvcid": "4420" 00:22:14.839 }, 00:22:14.839 "peer_address": { 00:22:14.839 "trtype": "TCP", 00:22:14.839 "adrfam": "IPv4", 00:22:14.839 "traddr": "10.0.0.1", 00:22:14.839 "trsvcid": "34006" 00:22:14.839 }, 00:22:14.839 "auth": { 00:22:14.839 "state": "completed", 00:22:14.839 "digest": "sha512", 00:22:14.839 "dhgroup": "ffdhe4096" 00:22:14.839 } 00:22:14.839 } 00:22:14.839 ]' 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.839 14:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.839 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.098 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:15.098 14:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.667 14:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.667 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.927 14:41:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.927 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.186 00:22:16.186 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.186 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.186 14:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.445 { 00:22:16.445 "cntlid": 123, 00:22:16.445 "qid": 0, 00:22:16.445 "state": "enabled", 00:22:16.445 "thread": "nvmf_tgt_poll_group_000", 00:22:16.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:16.445 "listen_address": { 00:22:16.445 "trtype": "TCP", 00:22:16.445 "adrfam": "IPv4", 00:22:16.445 "traddr": "10.0.0.2", 00:22:16.445 "trsvcid": "4420" 00:22:16.445 }, 00:22:16.445 "peer_address": { 00:22:16.445 "trtype": "TCP", 00:22:16.445 "adrfam": "IPv4", 00:22:16.445 "traddr": "10.0.0.1", 00:22:16.445 "trsvcid": "34032" 00:22:16.445 }, 00:22:16.445 "auth": { 00:22:16.445 "state": "completed", 00:22:16.445 "digest": "sha512", 00:22:16.445 "dhgroup": "ffdhe4096" 00:22:16.445 } 00:22:16.445 } 00:22:16.445 ]' 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.445 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.704 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:16.704 14:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.272 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.272 14:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.531 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.532 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.532 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.791 00:22:17.791 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.791 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.791 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.050 { 00:22:18.050 "cntlid": 125, 00:22:18.050 "qid": 0, 00:22:18.050 "state": "enabled", 00:22:18.050 "thread": "nvmf_tgt_poll_group_000", 00:22:18.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:18.050 "listen_address": { 00:22:18.050 "trtype": "TCP", 00:22:18.050 "adrfam": "IPv4", 00:22:18.050 "traddr": "10.0.0.2", 00:22:18.050 
"trsvcid": "4420" 00:22:18.050 }, 00:22:18.050 "peer_address": { 00:22:18.050 "trtype": "TCP", 00:22:18.050 "adrfam": "IPv4", 00:22:18.050 "traddr": "10.0.0.1", 00:22:18.050 "trsvcid": "34058" 00:22:18.050 }, 00:22:18.050 "auth": { 00:22:18.050 "state": "completed", 00:22:18.050 "digest": "sha512", 00:22:18.050 "dhgroup": "ffdhe4096" 00:22:18.050 } 00:22:18.050 } 00:22:18.050 ]' 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.050 14:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.309 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:18.309 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.876 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.135 14:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.394 00:22:19.394 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.394 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.394 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.653 { 00:22:19.653 "cntlid": 127, 00:22:19.653 "qid": 0, 00:22:19.653 "state": "enabled", 00:22:19.653 "thread": "nvmf_tgt_poll_group_000", 00:22:19.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:19.653 "listen_address": { 00:22:19.653 "trtype": "TCP", 00:22:19.653 "adrfam": "IPv4", 00:22:19.653 "traddr": "10.0.0.2", 00:22:19.653 "trsvcid": "4420" 00:22:19.653 }, 00:22:19.653 "peer_address": { 00:22:19.653 "trtype": "TCP", 00:22:19.653 "adrfam": "IPv4", 00:22:19.653 "traddr": "10.0.0.1", 00:22:19.653 "trsvcid": "34084" 00:22:19.653 }, 00:22:19.653 "auth": { 00:22:19.653 "state": "completed", 00:22:19.653 "digest": "sha512", 00:22:19.653 "dhgroup": "ffdhe4096" 00:22:19.653 } 00:22:19.653 } 00:22:19.653 ]' 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.653 14:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.653 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.912 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:19.912 14:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:20.480 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.480 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.480 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.480 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:20.481 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.481 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.481 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.481 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.481 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.740 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.999 00:22:20.999 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.999 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.999 14:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.258 14:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.258 { 00:22:21.258 "cntlid": 129, 00:22:21.258 "qid": 0, 00:22:21.258 "state": "enabled", 00:22:21.258 "thread": "nvmf_tgt_poll_group_000", 00:22:21.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:21.258 "listen_address": { 00:22:21.258 "trtype": "TCP", 00:22:21.258 "adrfam": "IPv4", 00:22:21.258 "traddr": "10.0.0.2", 00:22:21.258 "trsvcid": "4420" 00:22:21.258 }, 00:22:21.258 "peer_address": { 00:22:21.258 "trtype": "TCP", 00:22:21.258 "adrfam": "IPv4", 00:22:21.258 "traddr": "10.0.0.1", 00:22:21.258 "trsvcid": "34110" 00:22:21.258 }, 00:22:21.258 "auth": { 00:22:21.258 "state": "completed", 00:22:21.258 "digest": "sha512", 00:22:21.258 "dhgroup": "ffdhe6144" 00:22:21.258 } 00:22:21.258 } 00:22:21.258 ]' 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.258 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.518 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:21.518 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.087 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.087 14:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.346 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.604 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.864 { 00:22:22.864 "cntlid": 131, 00:22:22.864 "qid": 0, 00:22:22.864 "state": "enabled", 00:22:22.864 "thread": "nvmf_tgt_poll_group_000", 00:22:22.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:22.864 "listen_address": { 00:22:22.864 "trtype": "TCP", 00:22:22.864 "adrfam": "IPv4", 00:22:22.864 "traddr": "10.0.0.2", 00:22:22.864 
"trsvcid": "4420" 00:22:22.864 }, 00:22:22.864 "peer_address": { 00:22:22.864 "trtype": "TCP", 00:22:22.864 "adrfam": "IPv4", 00:22:22.864 "traddr": "10.0.0.1", 00:22:22.864 "trsvcid": "60012" 00:22:22.864 }, 00:22:22.864 "auth": { 00:22:22.864 "state": "completed", 00:22:22.864 "digest": "sha512", 00:22:22.864 "dhgroup": "ffdhe6144" 00:22:22.864 } 00:22:22.864 } 00:22:22.864 ]' 00:22:22.864 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.123 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.382 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:23.382 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.950 14:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.518 00:22:24.518 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.518 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:24.518 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.519 { 00:22:24.519 "cntlid": 133, 00:22:24.519 "qid": 0, 00:22:24.519 "state": "enabled", 00:22:24.519 "thread": "nvmf_tgt_poll_group_000", 00:22:24.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:24.519 "listen_address": { 00:22:24.519 "trtype": "TCP", 00:22:24.519 "adrfam": "IPv4", 00:22:24.519 "traddr": "10.0.0.2", 00:22:24.519 "trsvcid": "4420" 00:22:24.519 }, 00:22:24.519 "peer_address": { 00:22:24.519 "trtype": "TCP", 00:22:24.519 "adrfam": "IPv4", 00:22:24.519 "traddr": "10.0.0.1", 00:22:24.519 "trsvcid": "60042" 00:22:24.519 }, 00:22:24.519 "auth": { 00:22:24.519 "state": "completed", 00:22:24.519 "digest": "sha512", 00:22:24.519 "dhgroup": "ffdhe6144" 00:22:24.519 } 00:22:24.519 } 00:22:24.519 ]' 00:22:24.519 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.778 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.778 14:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.778 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.778 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.778 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.778 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.778 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.036 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:25.036 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.604 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:25.863 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.122 00:22:26.122 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.122 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.122 14:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.381 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.381 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.381 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.381 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.381 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.381 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.381 { 00:22:26.381 "cntlid": 135, 00:22:26.381 "qid": 0, 00:22:26.381 "state": "enabled", 00:22:26.381 "thread": "nvmf_tgt_poll_group_000", 00:22:26.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:26.381 "listen_address": { 00:22:26.381 "trtype": "TCP", 00:22:26.381 "adrfam": "IPv4", 00:22:26.381 "traddr": "10.0.0.2", 00:22:26.382 "trsvcid": "4420" 00:22:26.382 }, 00:22:26.382 "peer_address": { 00:22:26.382 "trtype": "TCP", 00:22:26.382 "adrfam": "IPv4", 00:22:26.382 "traddr": "10.0.0.1", 00:22:26.382 "trsvcid": "60058" 00:22:26.382 }, 00:22:26.382 "auth": { 00:22:26.382 "state": "completed", 00:22:26.382 "digest": "sha512", 00:22:26.382 "dhgroup": "ffdhe6144" 00:22:26.382 } 00:22:26.382 } 00:22:26.382 ]' 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.382 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.640 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:26.640 14:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:27.207 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.207 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.207 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.208 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.208 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.208 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.208 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.208 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.208 14:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.465 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.033 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.033 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.291 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.291 { 00:22:28.291 "cntlid": 137, 00:22:28.291 "qid": 0, 00:22:28.291 "state": "enabled", 00:22:28.291 "thread": "nvmf_tgt_poll_group_000", 00:22:28.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:28.291 "listen_address": { 00:22:28.291 "trtype": "TCP", 00:22:28.291 "adrfam": "IPv4", 00:22:28.291 "traddr": "10.0.0.2", 00:22:28.291 
"trsvcid": "4420" 00:22:28.291 }, 00:22:28.291 "peer_address": { 00:22:28.291 "trtype": "TCP", 00:22:28.291 "adrfam": "IPv4", 00:22:28.291 "traddr": "10.0.0.1", 00:22:28.291 "trsvcid": "60086" 00:22:28.291 }, 00:22:28.291 "auth": { 00:22:28.291 "state": "completed", 00:22:28.291 "digest": "sha512", 00:22:28.291 "dhgroup": "ffdhe8192" 00:22:28.291 } 00:22:28.291 } 00:22:28.291 ]' 00:22:28.291 14:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.291 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.550 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:28.550 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.116 14:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.376 14:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.376 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.946 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.946 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.947 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.947 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.947 { 00:22:29.947 "cntlid": 139, 00:22:29.947 "qid": 0, 00:22:29.947 "state": "enabled", 00:22:29.947 "thread": "nvmf_tgt_poll_group_000", 00:22:29.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:29.947 "listen_address": { 00:22:29.947 "trtype": "TCP", 00:22:29.947 "adrfam": "IPv4", 00:22:29.947 "traddr": "10.0.0.2", 00:22:29.947 "trsvcid": "4420" 00:22:29.947 }, 00:22:29.947 "peer_address": { 00:22:29.947 "trtype": "TCP", 00:22:29.947 "adrfam": "IPv4", 00:22:29.947 "traddr": "10.0.0.1", 00:22:29.947 "trsvcid": "60118" 00:22:29.947 }, 00:22:29.947 "auth": { 00:22:29.947 "state": "completed", 00:22:29.947 "digest": "sha512", 00:22:29.947 "dhgroup": "ffdhe8192" 00:22:29.947 } 00:22:29.947 } 00:22:29.947 ]' 00:22:29.947 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.205 14:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.205 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.205 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.205 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.205 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.205 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.205 14:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.463 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:30.463 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: --dhchap-ctrl-secret DHHC-1:02:MzEzNjI1YjA2ZjI0MTY2NmQ1YTdhODc1ZTgyYmEwMGVhNGI0YWNlYjlhY2FhOGI4EFQTsQ==: 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.028 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.029 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.287 14:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.287 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.287 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.287 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.287 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.544 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.803 14:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.803 { 00:22:31.803 "cntlid": 141, 00:22:31.803 "qid": 0, 00:22:31.803 "state": "enabled", 00:22:31.803 "thread": "nvmf_tgt_poll_group_000", 00:22:31.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:31.803 "listen_address": { 00:22:31.803 "trtype": "TCP", 00:22:31.803 "adrfam": "IPv4", 00:22:31.803 "traddr": "10.0.0.2", 00:22:31.803 "trsvcid": "4420" 00:22:31.803 }, 00:22:31.803 "peer_address": { 00:22:31.803 "trtype": "TCP", 00:22:31.803 "adrfam": "IPv4", 00:22:31.803 "traddr": "10.0.0.1", 00:22:31.803 "trsvcid": "60146" 00:22:31.803 }, 00:22:31.803 "auth": { 00:22:31.803 "state": "completed", 00:22:31.803 "digest": "sha512", 00:22:31.803 "dhgroup": "ffdhe8192" 00:22:31.803 } 00:22:31.803 } 00:22:31.803 ]' 00:22:31.803 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.062 14:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.321 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:32.321 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:01:ZTRiYWFhZjQ2NjRhNGE2MjM0Y2I5ODliMGU2NDAzNzDxNtqK: 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.889 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.148 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.149 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.149 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.407 00:22:33.407 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.407 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.407 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.667 { 00:22:33.667 "cntlid": 143, 00:22:33.667 "qid": 0, 00:22:33.667 "state": "enabled", 00:22:33.667 "thread": "nvmf_tgt_poll_group_000", 00:22:33.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:33.667 "listen_address": { 00:22:33.667 "trtype": "TCP", 00:22:33.667 "adrfam": 
"IPv4", 00:22:33.667 "traddr": "10.0.0.2", 00:22:33.667 "trsvcid": "4420" 00:22:33.667 }, 00:22:33.667 "peer_address": { 00:22:33.667 "trtype": "TCP", 00:22:33.667 "adrfam": "IPv4", 00:22:33.667 "traddr": "10.0.0.1", 00:22:33.667 "trsvcid": "34214" 00:22:33.667 }, 00:22:33.667 "auth": { 00:22:33.667 "state": "completed", 00:22:33.667 "digest": "sha512", 00:22:33.667 "dhgroup": "ffdhe8192" 00:22:33.667 } 00:22:33.667 } 00:22:33.667 ]' 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.667 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.926 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:33.926 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.926 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.926 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.926 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.185 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:34.185 14:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.752 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.012 14:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.012 14:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.271 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.530 { 00:22:35.530 "cntlid": 145, 00:22:35.530 "qid": 0, 00:22:35.530 "state": "enabled", 00:22:35.530 "thread": "nvmf_tgt_poll_group_000", 00:22:35.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:35.530 "listen_address": { 00:22:35.530 "trtype": "TCP", 00:22:35.530 "adrfam": "IPv4", 00:22:35.530 "traddr": "10.0.0.2", 00:22:35.530 "trsvcid": "4420" 00:22:35.530 }, 00:22:35.530 "peer_address": { 00:22:35.530 "trtype": "TCP", 00:22:35.530 "adrfam": "IPv4", 00:22:35.530 "traddr": "10.0.0.1", 00:22:35.530 "trsvcid": "34236" 00:22:35.530 }, 00:22:35.530 "auth": { 00:22:35.530 "state": 
"completed", 00:22:35.530 "digest": "sha512", 00:22:35.530 "dhgroup": "ffdhe8192" 00:22:35.530 } 00:22:35.530 } 00:22:35.530 ]' 00:22:35.530 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.789 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.048 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:36.048 14:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZjZjMzY2YjNjZTAyMzRlMzE2OTFiNWY3NTA2NGJmZGMxNDFiMTdiYjQzZDBiNTEyaHXYjg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDUzZGRjZWRjMjQ2MmRiYjdhNTRhNTNmYzkwYzhlNTc0ZjdkZGIwYmI1MzYwM2YxYzkxZDQ3MzA4Mzk4ZGNjYUbyWVM=: 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:36.616 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:36.875 request: 00:22:36.875 { 00:22:36.875 "name": "nvme0", 00:22:36.875 "trtype": "tcp", 00:22:36.875 "traddr": "10.0.0.2", 00:22:36.875 "adrfam": "ipv4", 00:22:36.875 "trsvcid": "4420", 00:22:36.875 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:36.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:36.875 "prchk_reftag": false, 00:22:36.875 "prchk_guard": false, 00:22:36.875 "hdgst": false, 00:22:36.875 "ddgst": false, 00:22:36.875 "dhchap_key": "key2", 00:22:36.875 "allow_unrecognized_csi": false, 00:22:36.875 "method": "bdev_nvme_attach_controller", 00:22:36.875 "req_id": 1 00:22:36.875 } 00:22:36.875 Got JSON-RPC error response 00:22:36.875 response: 00:22:36.875 { 00:22:36.875 "code": -5, 00:22:36.875 "message": 
"Input/output error" 00:22:36.875 } 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.875 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:37.135 14:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.135 14:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.393 request: 00:22:37.393 { 00:22:37.393 "name": "nvme0", 00:22:37.393 "trtype": "tcp", 00:22:37.393 "traddr": "10.0.0.2", 00:22:37.393 "adrfam": "ipv4", 00:22:37.393 "trsvcid": "4420", 00:22:37.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:37.393 "prchk_reftag": false, 00:22:37.393 "prchk_guard": false, 00:22:37.393 "hdgst": 
false, 00:22:37.393 "ddgst": false, 00:22:37.393 "dhchap_key": "key1", 00:22:37.393 "dhchap_ctrlr_key": "ckey2", 00:22:37.393 "allow_unrecognized_csi": false, 00:22:37.393 "method": "bdev_nvme_attach_controller", 00:22:37.393 "req_id": 1 00:22:37.393 } 00:22:37.393 Got JSON-RPC error response 00:22:37.393 response: 00:22:37.393 { 00:22:37.393 "code": -5, 00:22:37.393 "message": "Input/output error" 00:22:37.393 } 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.393 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.960 request: 00:22:37.960 { 00:22:37.960 "name": "nvme0", 00:22:37.960 "trtype": 
"tcp", 00:22:37.960 "traddr": "10.0.0.2", 00:22:37.960 "adrfam": "ipv4", 00:22:37.960 "trsvcid": "4420", 00:22:37.960 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:37.960 "prchk_reftag": false, 00:22:37.960 "prchk_guard": false, 00:22:37.960 "hdgst": false, 00:22:37.960 "ddgst": false, 00:22:37.960 "dhchap_key": "key1", 00:22:37.960 "dhchap_ctrlr_key": "ckey1", 00:22:37.960 "allow_unrecognized_csi": false, 00:22:37.960 "method": "bdev_nvme_attach_controller", 00:22:37.960 "req_id": 1 00:22:37.960 } 00:22:37.960 Got JSON-RPC error response 00:22:37.960 response: 00:22:37.960 { 00:22:37.960 "code": -5, 00:22:37.961 "message": "Input/output error" 00:22:37.961 } 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1574920 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1574920 ']' 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1574920 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574920 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574920' 00:22:37.961 killing process with pid 1574920 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1574920 00:22:37.961 14:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1574920 00:22:38.219 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:38.219 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:38.219 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.219 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.219 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1596630 00:22:38.219 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1596630 00:22:38.219 14:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:38.220 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1596630 ']' 00:22:38.220 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.220 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.220 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.220 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.220 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1596630 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1596630 ']' 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.479 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.739 null0 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.F7e 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.mk6 ]] 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mk6 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gey 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.739 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ADY ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ADY 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RM4 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.gV9 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gV9 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bXe 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:38.740 14:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.676 nvme0n1 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.676 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.935 { 00:22:39.935 "cntlid": 1, 00:22:39.935 "qid": 0, 00:22:39.935 "state": "enabled", 00:22:39.935 "thread": "nvmf_tgt_poll_group_000", 00:22:39.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:39.935 "listen_address": { 00:22:39.935 "trtype": "TCP", 00:22:39.935 "adrfam": "IPv4", 00:22:39.935 "traddr": "10.0.0.2", 00:22:39.935 "trsvcid": "4420" 00:22:39.935 }, 00:22:39.935 "peer_address": { 00:22:39.935 "trtype": "TCP", 00:22:39.935 "adrfam": "IPv4", 00:22:39.935 "traddr": 
"10.0.0.1", 00:22:39.935 "trsvcid": "34308" 00:22:39.935 }, 00:22:39.935 "auth": { 00:22:39.935 "state": "completed", 00:22:39.935 "digest": "sha512", 00:22:39.935 "dhgroup": "ffdhe8192" 00:22:39.935 } 00:22:39.935 } 00:22:39.935 ]' 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.935 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.194 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:40.194 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:40.761 14:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.761 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.761 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.761 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.761 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.761 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:40.762 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.762 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.762 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.762 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:40.762 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:41.021 14:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.021 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.021 request: 00:22:41.021 { 00:22:41.021 "name": "nvme0", 00:22:41.021 "trtype": "tcp", 00:22:41.021 "traddr": "10.0.0.2", 00:22:41.021 "adrfam": "ipv4", 00:22:41.021 "trsvcid": "4420", 00:22:41.021 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:41.021 "prchk_reftag": false, 00:22:41.021 "prchk_guard": false, 00:22:41.021 "hdgst": false, 00:22:41.021 "ddgst": false, 00:22:41.021 "dhchap_key": "key3", 00:22:41.021 
"allow_unrecognized_csi": false, 00:22:41.021 "method": "bdev_nvme_attach_controller", 00:22:41.021 "req_id": 1 00:22:41.021 } 00:22:41.021 Got JSON-RPC error response 00:22:41.021 response: 00:22:41.021 { 00:22:41.021 "code": -5, 00:22:41.021 "message": "Input/output error" 00:22:41.021 } 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:41.280 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:41.280 14:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.280 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.539 request: 00:22:41.539 { 00:22:41.539 "name": "nvme0", 00:22:41.539 "trtype": "tcp", 00:22:41.539 "traddr": "10.0.0.2", 00:22:41.539 "adrfam": "ipv4", 00:22:41.539 "trsvcid": "4420", 00:22:41.539 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:41.539 "prchk_reftag": false, 00:22:41.539 "prchk_guard": false, 00:22:41.539 "hdgst": false, 00:22:41.539 "ddgst": false, 00:22:41.539 "dhchap_key": "key3", 00:22:41.539 "allow_unrecognized_csi": false, 00:22:41.539 "method": "bdev_nvme_attach_controller", 00:22:41.539 "req_id": 1 00:22:41.539 } 00:22:41.539 Got JSON-RPC error response 00:22:41.539 response: 00:22:41.539 { 00:22:41.539 "code": -5, 00:22:41.539 "message": "Input/output error" 00:22:41.539 } 00:22:41.539 
14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.539 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.798 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:42.057 request: 00:22:42.057 { 00:22:42.057 "name": "nvme0", 00:22:42.057 "trtype": "tcp", 00:22:42.057 "traddr": "10.0.0.2", 00:22:42.057 "adrfam": "ipv4", 00:22:42.057 "trsvcid": "4420", 00:22:42.057 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:42.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:42.057 "prchk_reftag": false, 00:22:42.057 "prchk_guard": false, 00:22:42.057 "hdgst": false, 00:22:42.057 "ddgst": false, 00:22:42.057 "dhchap_key": "key0", 00:22:42.057 "dhchap_ctrlr_key": "key1", 00:22:42.057 "allow_unrecognized_csi": false, 00:22:42.057 "method": "bdev_nvme_attach_controller", 00:22:42.057 "req_id": 1 00:22:42.057 } 00:22:42.057 Got JSON-RPC error response 00:22:42.057 response: 00:22:42.057 { 00:22:42.057 "code": -5, 00:22:42.057 "message": "Input/output error" 00:22:42.057 } 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:42.057 14:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:42.316 nvme0n1 00:22:42.316 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:42.316 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:42.316 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.575 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.575 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.575 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:42.834 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:43.769 nvme0n1 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.769 
14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:43.769 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.027 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.027 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:44.027 14:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: --dhchap-ctrl-secret DHHC-1:03:MzA3NmIwMDE1YTBlNGZmMmQ2ZjJmMzRhYWU0NWU0ODMyYWJlM2M5MjQ2OGEwMWZiZjIxZjc3NDI3Zjc3OTQ1ZmFaxH4=: 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.593 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:44.851 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.851 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:44.851 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:44.851 14:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:45.108 request: 00:22:45.108 { 00:22:45.108 "name": "nvme0", 00:22:45.108 "trtype": "tcp", 00:22:45.108 "traddr": "10.0.0.2", 00:22:45.108 "adrfam": "ipv4", 00:22:45.108 "trsvcid": "4420", 00:22:45.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:45.108 "prchk_reftag": false, 00:22:45.108 "prchk_guard": false, 00:22:45.108 "hdgst": false, 00:22:45.108 "ddgst": false, 00:22:45.108 "dhchap_key": "key1", 00:22:45.108 "allow_unrecognized_csi": false, 00:22:45.108 "method": "bdev_nvme_attach_controller", 00:22:45.108 "req_id": 1 00:22:45.108 } 00:22:45.108 Got JSON-RPC error response 00:22:45.108 response: 00:22:45.108 { 00:22:45.108 "code": -5, 00:22:45.108 "message": "Input/output error" 00:22:45.108 } 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:45.108 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:46.041 nvme0n1 00:22:46.041 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:46.041 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:46.041 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.041 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.041 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.041 14:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:46.300 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:46.559 nvme0n1 00:22:46.559 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:46.559 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:46.559 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.819 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.819 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.819 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: '' 2s 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: ]] 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjE5MjMxOTllOWExY2FjMTdmZTMxYTZhYzU0OTU4OWItZSu/: 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:47.078 14:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:49.092 
14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: 2s 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:49.092 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:49.093 14:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: ]] 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzhhZDU2MDEyOTg2M2M1NjFjN2QzYTA2NjkxOWMxZGM2OWI3ZGI2YmEzNjVlZmRk9SHYCA==: 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:49.093 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:50.997 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.256 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:51.256 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.256 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.256 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.256 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:51.257 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:51.257 14:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:51.823 nvme0n1 00:22:51.823 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:51.823 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.823 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.823 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.823 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:51.824 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.391 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:52.391 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.391 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:52.650 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:52.909 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:53.168 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.168 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:53.168 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.168 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:53.168 14:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:53.427 request: 00:22:53.427 { 00:22:53.427 "name": "nvme0", 00:22:53.427 "dhchap_key": "key1", 00:22:53.427 "dhchap_ctrlr_key": "key3", 00:22:53.427 "method": "bdev_nvme_set_keys", 00:22:53.427 "req_id": 1 00:22:53.427 } 00:22:53.427 Got JSON-RPC error response 00:22:53.427 response: 00:22:53.427 { 00:22:53.427 "code": -13, 00:22:53.427 "message": "Permission denied" 00:22:53.427 } 00:22:53.427 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:53.427 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.427 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.427 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.427 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:53.427 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:53.427 14:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.686 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:53.686 14:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:54.623 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:54.623 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:54.623 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:54.882 14:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:55.819 nvme0n1 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.819 14:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:55.819 14:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:56.078 request: 00:22:56.078 { 00:22:56.078 "name": "nvme0", 00:22:56.078 "dhchap_key": "key2", 00:22:56.078 "dhchap_ctrlr_key": "key0", 00:22:56.078 "method": "bdev_nvme_set_keys", 00:22:56.078 "req_id": 1 00:22:56.078 } 00:22:56.078 Got JSON-RPC error response 00:22:56.078 response: 00:22:56.078 { 00:22:56.078 "code": -13, 00:22:56.078 "message": "Permission denied" 00:22:56.078 } 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:56.078 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.337 14:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:56.337 14:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1574939 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1574939 ']' 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1574939 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574939 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574939' 00:22:57.714 killing process with pid 1574939 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1574939 00:22:57.714 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1574939 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.974 rmmod nvme_tcp 00:22:57.974 rmmod nvme_fabrics 00:22:57.974 rmmod nvme_keyring 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1596630 ']' 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1596630 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1596630 ']' 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1596630 
00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596630 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596630' 00:22:57.974 killing process with pid 1596630 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1596630 00:22:57.974 14:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1596630 00:22:58.233 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.234 14:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.234 14:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.F7e /tmp/spdk.key-sha256.gey /tmp/spdk.key-sha384.RM4 /tmp/spdk.key-sha512.bXe /tmp/spdk.key-sha512.mk6 /tmp/spdk.key-sha384.ADY /tmp/spdk.key-sha256.gV9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:00.769 00:23:00.769 real 2m33.838s 00:23:00.769 user 5m54.899s 00:23:00.769 sys 0m24.327s 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.769 ************************************ 00:23:00.769 END TEST nvmf_auth_target 00:23:00.769 ************************************ 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:00.769 ************************************ 00:23:00.769 START TEST nvmf_bdevio_no_huge 00:23:00.769 ************************************ 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:00.769 * Looking for test storage... 00:23:00.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.769 --rc genhtml_branch_coverage=1 00:23:00.769 --rc genhtml_function_coverage=1 00:23:00.769 --rc genhtml_legend=1 00:23:00.769 --rc geninfo_all_blocks=1 00:23:00.769 --rc geninfo_unexecuted_blocks=1 00:23:00.769 00:23:00.769 ' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.769 --rc genhtml_branch_coverage=1 00:23:00.769 --rc genhtml_function_coverage=1 00:23:00.769 --rc genhtml_legend=1 00:23:00.769 --rc geninfo_all_blocks=1 00:23:00.769 --rc geninfo_unexecuted_blocks=1 00:23:00.769 00:23:00.769 ' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.769 --rc genhtml_branch_coverage=1 00:23:00.769 --rc genhtml_function_coverage=1 00:23:00.769 --rc genhtml_legend=1 00:23:00.769 --rc geninfo_all_blocks=1 00:23:00.769 --rc geninfo_unexecuted_blocks=1 00:23:00.769 00:23:00.769 ' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.769 --rc genhtml_branch_coverage=1 
00:23:00.769 --rc genhtml_function_coverage=1 00:23:00.769 --rc genhtml_legend=1 00:23:00.769 --rc geninfo_all_blocks=1 00:23:00.769 --rc geninfo_unexecuted_blocks=1 00:23:00.769 00:23:00.769 ' 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.769 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.769 14:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.770 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:23:07.339 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.339 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.339 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.340 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.340 
14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.340 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:07.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:23:07.340 00:23:07.340 --- 10.0.0.2 ping statistics --- 00:23:07.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.340 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:23:07.340 00:23:07.340 --- 10.0.0.1 ping statistics --- 00:23:07.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.340 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1604056 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1604056 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1604056 ']' 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.340 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.340 [2024-11-20 14:42:18.433564] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:07.340 [2024-11-20 14:42:18.433617] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:07.340 [2024-11-20 14:42:18.524094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.340 [2024-11-20 14:42:18.570975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.340 [2024-11-20 14:42:18.571010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.340 [2024-11-20 14:42:18.571017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.340 [2024-11-20 14:42:18.571023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.340 [2024-11-20 14:42:18.571029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.340 [2024-11-20 14:42:18.572212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:07.340 [2024-11-20 14:42:18.572318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:07.340 [2024-11-20 14:42:18.572424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.340 [2024-11-20 14:42:18.572425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:07.340 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.340 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:07.340 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.340 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.340 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.599 [2024-11-20 14:42:19.330706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:07.599 14:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.599 Malloc0 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.599 [2024-11-20 14:42:19.374973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.599 14:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.599 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.599 { 00:23:07.599 "params": { 00:23:07.599 "name": "Nvme$subsystem", 00:23:07.599 "trtype": "$TEST_TRANSPORT", 00:23:07.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.599 "adrfam": "ipv4", 00:23:07.599 "trsvcid": "$NVMF_PORT", 00:23:07.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.599 "hdgst": ${hdgst:-false}, 00:23:07.599 "ddgst": ${ddgst:-false} 00:23:07.599 }, 00:23:07.599 "method": "bdev_nvme_attach_controller" 00:23:07.599 } 00:23:07.600 EOF 00:23:07.600 )") 00:23:07.600 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:07.600 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:23:07.600 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:07.600 14:42:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:07.600 "params": { 00:23:07.600 "name": "Nvme1", 00:23:07.600 "trtype": "tcp", 00:23:07.600 "traddr": "10.0.0.2", 00:23:07.600 "adrfam": "ipv4", 00:23:07.600 "trsvcid": "4420", 00:23:07.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.600 "hdgst": false, 00:23:07.600 "ddgst": false 00:23:07.600 }, 00:23:07.600 "method": "bdev_nvme_attach_controller" 00:23:07.600 }' 00:23:07.600 [2024-11-20 14:42:19.427635] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:07.600 [2024-11-20 14:42:19.427682] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1604138 ] 00:23:07.600 [2024-11-20 14:42:19.508262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.858 [2024-11-20 14:42:19.557457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.858 [2024-11-20 14:42:19.557564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.858 [2024-11-20 14:42:19.557565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.858 I/O targets: 00:23:07.858 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:07.858 00:23:07.858 00:23:07.858 CUnit - A unit testing framework for C - Version 2.1-3 00:23:07.858 http://cunit.sourceforge.net/ 00:23:07.858 00:23:07.858 00:23:07.858 Suite: bdevio tests on: Nvme1n1 00:23:07.858 Test: blockdev write read block ...passed 00:23:08.118 Test: blockdev write zeroes read block ...passed 00:23:08.118 Test: blockdev write zeroes read no split ...passed 00:23:08.118 Test: blockdev write zeroes 
read split ...passed 00:23:08.118 Test: blockdev write zeroes read split partial ...passed 00:23:08.118 Test: blockdev reset ...[2024-11-20 14:42:19.887307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:08.118 [2024-11-20 14:42:19.887372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1516920 (9): Bad file descriptor 00:23:08.118 [2024-11-20 14:42:20.023300] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:23:08.118 passed 00:23:08.118 Test: blockdev write read 8 blocks ...passed 00:23:08.118 Test: blockdev write read size > 128k ...passed 00:23:08.118 Test: blockdev write read invalid size ...passed 00:23:08.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:08.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:08.378 Test: blockdev write read max offset ...passed 00:23:08.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:08.378 Test: blockdev writev readv 8 blocks ...passed 00:23:08.378 Test: blockdev writev readv 30 x 1block ...passed 00:23:08.378 Test: blockdev writev readv block ...passed 00:23:08.378 Test: blockdev writev readv size > 128k ...passed 00:23:08.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:08.378 Test: blockdev comparev and writev ...[2024-11-20 14:42:20.316919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.316953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.316967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 
14:42:20.316975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.317214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.317225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.317236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.317243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.317458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.317468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.317480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.317487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.317723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.317733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.378 [2024-11-20 14:42:20.317744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.378 [2024-11-20 14:42:20.317751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.638 passed 00:23:08.638 Test: blockdev nvme passthru rw ...passed 00:23:08.638 Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:42:20.399315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.638 [2024-11-20 14:42:20.399332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.638 [2024-11-20 14:42:20.399440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.638 [2024-11-20 14:42:20.399449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:08.638 [2024-11-20 14:42:20.399556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.638 [2024-11-20 14:42:20.399566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.638 [2024-11-20 14:42:20.399665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.638 [2024-11-20 14:42:20.399674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.638 passed 00:23:08.638 Test: blockdev nvme admin passthru ...passed 00:23:08.638 Test: blockdev copy ...passed 00:23:08.638 00:23:08.638 Run Summary: Type Total Ran Passed Failed Inactive 00:23:08.638 suites 1 1 n/a 0 0 00:23:08.638 tests 23 23 23 0 0 00:23:08.638 asserts 152 152 152 0 n/a 00:23:08.638 00:23:08.638 Elapsed time = 1.398 seconds 
00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.898 rmmod nvme_tcp 00:23:08.898 rmmod nvme_fabrics 00:23:08.898 rmmod nvme_keyring 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1604056 ']' 00:23:08.898 14:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1604056 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1604056 ']' 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1604056 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604056 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604056' 00:23:08.898 killing process with pid 1604056 00:23:08.898 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1604056 00:23:09.157 14:42:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1604056 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:09.417 14:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.417 14:42:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.324 00:23:11.324 real 0m10.998s 00:23:11.324 user 0m14.297s 00:23:11.324 sys 0m5.437s 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:11.324 ************************************ 00:23:11.324 END TEST nvmf_bdevio_no_huge 00:23:11.324 ************************************ 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.324 14:42:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.584 
************************************ 00:23:11.584 START TEST nvmf_tls 00:23:11.584 ************************************ 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:11.584 * Looking for test storage... 00:23:11.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.584 --rc genhtml_branch_coverage=1 00:23:11.584 --rc genhtml_function_coverage=1 00:23:11.584 --rc genhtml_legend=1 00:23:11.584 --rc geninfo_all_blocks=1 00:23:11.584 --rc geninfo_unexecuted_blocks=1 00:23:11.584 00:23:11.584 ' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.584 --rc genhtml_branch_coverage=1 00:23:11.584 --rc genhtml_function_coverage=1 00:23:11.584 --rc genhtml_legend=1 00:23:11.584 --rc geninfo_all_blocks=1 00:23:11.584 --rc geninfo_unexecuted_blocks=1 00:23:11.584 00:23:11.584 ' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.584 --rc genhtml_branch_coverage=1 00:23:11.584 --rc genhtml_function_coverage=1 00:23:11.584 --rc genhtml_legend=1 00:23:11.584 --rc geninfo_all_blocks=1 00:23:11.584 --rc geninfo_unexecuted_blocks=1 00:23:11.584 00:23:11.584 ' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:11.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.584 --rc genhtml_branch_coverage=1 00:23:11.584 --rc genhtml_function_coverage=1 00:23:11.584 --rc genhtml_legend=1 00:23:11.584 --rc geninfo_all_blocks=1 00:23:11.584 --rc geninfo_unexecuted_blocks=1 00:23:11.584 00:23:11.584 ' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.584 
14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.584 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:11.585 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.160 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.161 14:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:18.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:18.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.161 14:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:18.161 Found net devices under 0000:86:00.0: cvl_0_0 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:18.161 Found net devices under 0000:86:00.1: cvl_0_1 00:23:18.161 14:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.161 
14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:23:18.161 00:23:18.161 --- 10.0.0.2 ping statistics --- 00:23:18.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.161 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:23:18.161 00:23:18.161 --- 10.0.0.1 ping statistics --- 00:23:18.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.161 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:18.161 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1607958 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1607958 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1607958 ']' 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.162 14:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.162 [2024-11-20 14:42:29.549380] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:18.162 [2024-11-20 14:42:29.549430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.162 [2024-11-20 14:42:29.630994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.162 [2024-11-20 14:42:29.672230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.162 [2024-11-20 14:42:29.672267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:18.162 [2024-11-20 14:42:29.672274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.162 [2024-11-20 14:42:29.672280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.162 [2024-11-20 14:42:29.672288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.162 [2024-11-20 14:42:29.672875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.422 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.422 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.422 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.422 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.422 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.681 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.681 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:18.681 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:18.681 true 00:23:18.681 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:18.681 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:18.940 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:18.940 14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:18.940 
14:42:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:19.199 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:19.199 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:19.458 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:19.458 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:19.458 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:19.458 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:19.458 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:19.716 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:19.716 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:19.716 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:19.716 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:19.975 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:19.975 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:19.975 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:20.234 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:20.234 14:42:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:20.234 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:20.234 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:20.234 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:20.493 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:20.493 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:20.752 14:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TrQV1IGQK4 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6h7yzah5HW 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TrQV1IGQK4 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6h7yzah5HW 00:23:20.752 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:21.011 14:42:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:21.270 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TrQV1IGQK4 00:23:21.270 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TrQV1IGQK4 00:23:21.270 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.529 [2024-11-20 14:42:33.229167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.529 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:21.529 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:21.788 [2024-11-20 14:42:33.598122] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.788 [2024-11-20 14:42:33.598339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.788 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.047 malloc0 00:23:22.047 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.047 14:42:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TrQV1IGQK4 00:23:22.306 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.566 14:42:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TrQV1IGQK4 00:23:32.543 Initializing NVMe Controllers 00:23:32.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:32.543 Initialization complete. Launching workers. 
00:23:32.543 ======================================================== 00:23:32.543 Latency(us) 00:23:32.543 Device Information : IOPS MiB/s Average min max 00:23:32.543 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16137.50 63.04 3966.03 810.49 6247.75 00:23:32.543 ======================================================== 00:23:32.543 Total : 16137.50 63.04 3966.03 810.49 6247.75 00:23:32.543 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TrQV1IGQK4 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TrQV1IGQK4 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1610424 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.543 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1610424 /var/tmp/bdevperf.sock 00:23:32.544 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1610424 ']' 00:23:32.544 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:32.544 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.544 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.544 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.544 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.803 [2024-11-20 14:42:44.533679] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:23:32.803 [2024-11-20 14:42:44.533729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610424 ] 00:23:32.803 [2024-11-20 14:42:44.609705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.803 [2024-11-20 14:42:44.651746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.803 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.803 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.803 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TrQV1IGQK4 00:23:33.061 14:42:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:33.320 [2024-11-20 14:42:45.133079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.320 TLSTESTn1 00:23:33.320 14:42:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:33.579 Running I/O for 10 seconds... 00:23:35.454 5210.00 IOPS, 20.35 MiB/s [2024-11-20T13:42:48.350Z] 5388.50 IOPS, 21.05 MiB/s [2024-11-20T13:42:49.741Z] 5400.67 IOPS, 21.10 MiB/s [2024-11-20T13:42:50.679Z] 5440.25 IOPS, 21.25 MiB/s [2024-11-20T13:42:51.617Z] 5451.40 IOPS, 21.29 MiB/s [2024-11-20T13:42:52.556Z] 5455.00 IOPS, 21.31 MiB/s [2024-11-20T13:42:53.492Z] 5462.86 IOPS, 21.34 MiB/s [2024-11-20T13:42:54.430Z] 5476.88 IOPS, 21.39 MiB/s [2024-11-20T13:42:55.502Z] 5481.56 IOPS, 21.41 MiB/s [2024-11-20T13:42:55.502Z] 5473.50 IOPS, 21.38 MiB/s 00:23:43.544 Latency(us) 00:23:43.544 [2024-11-20T13:42:55.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.544 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.544 Verification LBA range: start 0x0 length 0x2000 00:23:43.544 TLSTESTn1 : 10.02 5473.61 21.38 0.00 0.00 23342.99 6525.11 36928.11 00:23:43.544 [2024-11-20T13:42:55.502Z] =================================================================================================================== 00:23:43.544 [2024-11-20T13:42:55.502Z] Total : 5473.61 21.38 0.00 0.00 23342.99 6525.11 36928.11 00:23:43.544 { 00:23:43.544 "results": [ 00:23:43.544 { 00:23:43.544 "job": "TLSTESTn1", 00:23:43.544 "core_mask": "0x4", 00:23:43.544 "workload": "verify", 00:23:43.544 "status": "finished", 00:23:43.544 "verify_range": { 00:23:43.544 "start": 0, 00:23:43.544 "length": 8192 00:23:43.544 }, 00:23:43.544 "queue_depth": 128, 00:23:43.544 "io_size": 4096, 00:23:43.544 "runtime": 10.022998, 00:23:43.544 "iops": 
5473.611787610853, 00:23:43.544 "mibps": 21.381296045354894, 00:23:43.544 "io_failed": 0, 00:23:43.544 "io_timeout": 0, 00:23:43.544 "avg_latency_us": 23342.985147682804, 00:23:43.544 "min_latency_us": 6525.106086956522, 00:23:43.544 "max_latency_us": 36928.111304347825 00:23:43.544 } 00:23:43.544 ], 00:23:43.544 "core_count": 1 00:23:43.544 } 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1610424 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1610424 ']' 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1610424 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610424 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610424' 00:23:43.544 killing process with pid 1610424 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1610424 00:23:43.544 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.544 00:23:43.544 Latency(us) 00:23:43.544 [2024-11-20T13:42:55.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.544 [2024-11-20T13:42:55.502Z] 
=================================================================================================================== 00:23:43.544 [2024-11-20T13:42:55.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.544 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1610424 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6h7yzah5HW 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6h7yzah5HW 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6h7yzah5HW 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6h7yzah5HW 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1612257 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1612257 /var/tmp/bdevperf.sock 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1612257 ']' 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.804 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.804 [2024-11-20 14:42:55.636429] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:43.804 [2024-11-20 14:42:55.636477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612257 ] 00:23:43.804 [2024-11-20 14:42:55.712394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.804 [2024-11-20 14:42:55.749806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.063 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.063 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.063 14:42:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6h7yzah5HW 00:23:44.322 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.322 [2024-11-20 14:42:56.210516] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.322 [2024-11-20 14:42:56.215342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:44.322 [2024-11-20 14:42:56.215961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153f170 (107): Transport endpoint is not connected 00:23:44.322 [2024-11-20 14:42:56.216954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153f170 (9): Bad file descriptor 00:23:44.322 
[2024-11-20 14:42:56.217952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:44.322 [2024-11-20 14:42:56.217961] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:44.322 [2024-11-20 14:42:56.217968] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:44.322 [2024-11-20 14:42:56.217979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:44.322 request: 00:23:44.322 { 00:23:44.322 "name": "TLSTEST", 00:23:44.323 "trtype": "tcp", 00:23:44.323 "traddr": "10.0.0.2", 00:23:44.323 "adrfam": "ipv4", 00:23:44.323 "trsvcid": "4420", 00:23:44.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.323 "prchk_reftag": false, 00:23:44.323 "prchk_guard": false, 00:23:44.323 "hdgst": false, 00:23:44.323 "ddgst": false, 00:23:44.323 "psk": "key0", 00:23:44.323 "allow_unrecognized_csi": false, 00:23:44.323 "method": "bdev_nvme_attach_controller", 00:23:44.323 "req_id": 1 00:23:44.323 } 00:23:44.323 Got JSON-RPC error response 00:23:44.323 response: 00:23:44.323 { 00:23:44.323 "code": -5, 00:23:44.323 "message": "Input/output error" 00:23:44.323 } 00:23:44.323 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1612257 00:23:44.323 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1612257 ']' 00:23:44.323 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1612257 00:23:44.323 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.323 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.323 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612257 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612257' 00:23:44.582 killing process with pid 1612257 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1612257 00:23:44.582 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.582 00:23:44.582 Latency(us) 00:23:44.582 [2024-11-20T13:42:56.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.582 [2024-11-20T13:42:56.540Z] =================================================================================================================== 00:23:44.582 [2024-11-20T13:42:56.540Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1612257 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TrQV1IGQK4 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
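The `format_interchange_psk` helper traced earlier (target/tls.sh@119/@120) emits keys of the form `NVMeTLSkey-1:01:<base64>:` via an inline `python -` step. A minimal sketch of that interchange formatting follows; it assumes the key text is taken as raw ASCII bytes with a little-endian standard CRC32 (`zlib.crc32`) appended before base64 encoding, and the function name here is illustrative, not the helper's actual internal name.

```python
# Hypothetical sketch of the PSK interchange formatting performed by the
# traced format_interchange_psk/format_key shell helpers.
# Assumptions: ASCII key bytes, zlib-style CRC32 appended little-endian,
# and the "01" field selecting the SHA-256 hash variant seen in the log.
import base64
import zlib


def format_interchange_psk(key: str, hash_id: int = 1) -> str:
    """Return a TLS PSK string in NVMe interchange format."""
    raw = key.encode("ascii")
    # Append the CRC32 of the key bytes, little-endian, then base64 the whole payload.
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"


psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

Under these assumptions the result has the same shape as the log's `key=NVMeTLSkey-1:01:...:` values: a 48-character base64 payload decoding to the 32 key characters plus 4 checksum bytes, terminated by a colon.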
00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TrQV1IGQK4 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TrQV1IGQK4 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TrQV1IGQK4 00:23:44.582 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1612390 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1612390 
/var/tmp/bdevperf.sock 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1612390 ']' 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.583 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.583 [2024-11-20 14:42:56.497443] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:44.583 [2024-11-20 14:42:56.497500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612390 ] 00:23:44.842 [2024-11-20 14:42:56.572494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.842 [2024-11-20 14:42:56.612356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.842 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.842 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.842 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TrQV1IGQK4 00:23:45.101 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:45.361 [2024-11-20 14:42:57.080838] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.361 [2024-11-20 14:42:57.086639] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:45.361 [2024-11-20 14:42:57.086660] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:45.361 [2024-11-20 14:42:57.086699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:45.361 [2024-11-20 14:42:57.087229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a5170 (107): Transport endpoint is not connected 00:23:45.361 [2024-11-20 14:42:57.088222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a5170 (9): Bad file descriptor 00:23:45.361 [2024-11-20 14:42:57.089224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:45.361 [2024-11-20 14:42:57.089236] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:45.361 [2024-11-20 14:42:57.089244] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:45.361 [2024-11-20 14:42:57.089254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:45.361 request: 00:23:45.361 { 00:23:45.361 "name": "TLSTEST", 00:23:45.361 "trtype": "tcp", 00:23:45.361 "traddr": "10.0.0.2", 00:23:45.361 "adrfam": "ipv4", 00:23:45.361 "trsvcid": "4420", 00:23:45.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.361 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.361 "prchk_reftag": false, 00:23:45.361 "prchk_guard": false, 00:23:45.361 "hdgst": false, 00:23:45.361 "ddgst": false, 00:23:45.361 "psk": "key0", 00:23:45.361 "allow_unrecognized_csi": false, 00:23:45.361 "method": "bdev_nvme_attach_controller", 00:23:45.361 "req_id": 1 00:23:45.361 } 00:23:45.361 Got JSON-RPC error response 00:23:45.361 response: 00:23:45.361 { 00:23:45.361 "code": -5, 00:23:45.361 "message": "Input/output error" 00:23:45.361 } 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1612390 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1612390 ']' 00:23:45.361 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1612390 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612390 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612390' 00:23:45.361 killing process with pid 1612390 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1612390 00:23:45.361 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.361 00:23:45.361 Latency(us) 00:23:45.361 [2024-11-20T13:42:57.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.361 [2024-11-20T13:42:57.319Z] =================================================================================================================== 00:23:45.361 [2024-11-20T13:42:57.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.361 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1612390 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.621 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TrQV1IGQK4 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TrQV1IGQK4 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TrQV1IGQK4 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TrQV1IGQK4 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1612509 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1612509 /var/tmp/bdevperf.sock 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1612509 ']' 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.621 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.621 [2024-11-20 14:42:57.374776] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:45.621 [2024-11-20 14:42:57.374824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612509 ] 00:23:45.621 [2024-11-20 14:42:57.450440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.621 [2024-11-20 14:42:57.491082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.880 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.880 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:45.880 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TrQV1IGQK4 00:23:45.880 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.140 [2024-11-20 14:42:57.979696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.140 [2024-11-20 14:42:57.988890] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:46.140 [2024-11-20 14:42:57.988910] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:46.140 [2024-11-20 14:42:57.988933] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:46.140 [2024-11-20 14:42:57.989106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdc170 (107): Transport endpoint is not connected 00:23:46.140 [2024-11-20 14:42:57.990099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbdc170 (9): Bad file descriptor 00:23:46.140 [2024-11-20 14:42:57.991100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:46.140 [2024-11-20 14:42:57.991113] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:46.140 [2024-11-20 14:42:57.991121] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:46.140 [2024-11-20 14:42:57.991130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:46.140 request: 00:23:46.140 { 00:23:46.140 "name": "TLSTEST", 00:23:46.140 "trtype": "tcp", 00:23:46.140 "traddr": "10.0.0.2", 00:23:46.140 "adrfam": "ipv4", 00:23:46.140 "trsvcid": "4420", 00:23:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:46.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.140 "prchk_reftag": false, 00:23:46.140 "prchk_guard": false, 00:23:46.140 "hdgst": false, 00:23:46.140 "ddgst": false, 00:23:46.140 "psk": "key0", 00:23:46.140 "allow_unrecognized_csi": false, 00:23:46.140 "method": "bdev_nvme_attach_controller", 00:23:46.140 "req_id": 1 00:23:46.140 } 00:23:46.140 Got JSON-RPC error response 00:23:46.140 response: 00:23:46.140 { 00:23:46.140 "code": -5, 00:23:46.140 "message": "Input/output error" 00:23:46.140 } 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1612509 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1612509 ']' 00:23:46.140 14:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1612509 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612509 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612509' 00:23:46.140 killing process with pid 1612509 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1612509 00:23:46.140 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.140 00:23:46.140 Latency(us) 00:23:46.140 [2024-11-20T13:42:58.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.140 [2024-11-20T13:42:58.098Z] =================================================================================================================== 00:23:46.140 [2024-11-20T13:42:58.098Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:46.140 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1612509 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.399 14:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1612745 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.399 14:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1612745 /var/tmp/bdevperf.sock 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1612745 ']' 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.399 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.399 [2024-11-20 14:42:58.274046] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:46.399 [2024-11-20 14:42:58.274099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612745 ] 00:23:46.399 [2024-11-20 14:42:58.350681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.658 [2024-11-20 14:42:58.388429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.658 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.658 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.659 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:46.918 [2024-11-20 14:42:58.667599] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:46.918 [2024-11-20 14:42:58.667631] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:46.918 request: 00:23:46.918 { 00:23:46.918 "name": "key0", 00:23:46.918 "path": "", 00:23:46.918 "method": "keyring_file_add_key", 00:23:46.918 "req_id": 1 00:23:46.918 } 00:23:46.918 Got JSON-RPC error response 00:23:46.918 response: 00:23:46.918 { 00:23:46.918 "code": -1, 00:23:46.918 "message": "Operation not permitted" 00:23:46.918 } 00:23:46.918 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.918 [2024-11-20 14:42:58.856183] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:46.918 [2024-11-20 14:42:58.856214] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:46.918 request: 00:23:46.918 { 00:23:46.918 "name": "TLSTEST", 00:23:46.918 "trtype": "tcp", 00:23:46.918 "traddr": "10.0.0.2", 00:23:46.918 "adrfam": "ipv4", 00:23:46.918 "trsvcid": "4420", 00:23:46.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.918 "prchk_reftag": false, 00:23:46.918 "prchk_guard": false, 00:23:46.918 "hdgst": false, 00:23:46.918 "ddgst": false, 00:23:46.918 "psk": "key0", 00:23:46.918 "allow_unrecognized_csi": false, 00:23:46.918 "method": "bdev_nvme_attach_controller", 00:23:46.918 "req_id": 1 00:23:46.918 } 00:23:46.918 Got JSON-RPC error response 00:23:46.918 response: 00:23:46.918 { 00:23:46.918 "code": -126, 00:23:46.918 "message": "Required key not available" 00:23:46.918 } 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1612745 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1612745 ']' 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1612745 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612745 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:47.177 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612745' 00:23:47.177 killing process with pid 1612745 
00:23:47.178 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1612745 00:23:47.178 Received shutdown signal, test time was about 10.000000 seconds 00:23:47.178 00:23:47.178 Latency(us) 00:23:47.178 [2024-11-20T13:42:59.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.178 [2024-11-20T13:42:59.136Z] =================================================================================================================== 00:23:47.178 [2024-11-20T13:42:59.136Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:47.178 14:42:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1612745 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1607958 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1607958 ']' 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1607958 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.178 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1607958 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1607958' 00:23:47.450 killing process with pid 1607958 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1607958 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1607958 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:47.450 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.oqKdPWxmod 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:47.451 14:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.oqKdPWxmod 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1612987 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1612987 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1612987 ']' 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.451 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.720 [2024-11-20 14:42:59.419345] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:47.720 [2024-11-20 14:42:59.419393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.721 [2024-11-20 14:42:59.495472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.721 [2024-11-20 14:42:59.532759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.721 [2024-11-20 14:42:59.532795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.721 [2024-11-20 14:42:59.532803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.721 [2024-11-20 14:42:59.532809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.721 [2024-11-20 14:42:59.532814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.721 [2024-11-20 14:42:59.533373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.oqKdPWxmod 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oqKdPWxmod 00:23:47.721 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.980 [2024-11-20 14:42:59.850381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.980 14:42:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:48.240 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:48.500 [2024-11-20 14:43:00.239424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.500 [2024-11-20 14:43:00.239642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:48.500 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:48.500 malloc0 00:23:48.500 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.759 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:23:49.018 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oqKdPWxmod 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oqKdPWxmod 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1613250 00:23:49.278 14:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1613250 /var/tmp/bdevperf.sock 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1613250 ']' 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.278 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.278 [2024-11-20 14:43:01.050808] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:23:49.278 [2024-11-20 14:43:01.050854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1613250 ] 00:23:49.278 [2024-11-20 14:43:01.123214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.278 [2024-11-20 14:43:01.163240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.537 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.537 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.537 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:23:49.537 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.797 [2024-11-20 14:43:01.631616] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.797 TLSTESTn1 00:23:49.797 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:50.056 Running I/O for 10 seconds... 
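The flow above registers the PSK with keyring_file_add_key and hands it to bdev_nvme_attach_controller via --psk; later in this run, target/tls.sh deliberately chmod's the key file to 0666 and the keyring rejects it ("Invalid permissions for key file '/tmp/tmp.oqKdPWxmod': 0100666"), then restores 0600 and retries. A minimal stand-alone sketch of that mode-0600 constraint, using a temp file in place of the real key (the exact check inside SPDK's keyring_file is an assumption inferred from the log's error text):

```shell
# Hypothetical demonstration (not SPDK code) of the permission constraint that
# keyring_file_add_key enforces on PSK files: the key must not be readable by
# group/other, i.e. its mode must be 0600.
key=$(mktemp)            # stand-in for the PSK file, e.g. /tmp/tmp.oqKdPWxmod

chmod 0666 "$key"        # what target/tls.sh@171 does to provoke the failure
mode=$(stat -c '%a' "$key")
if [ "$mode" != "600" ]; then
    echo "keyring would reject key file (mode $mode, need 600)"
fi

chmod 0600 "$key"        # what target/tls.sh@182 does to restore the key
mode=$(stat -c '%a' "$key")
if [ "$mode" = "600" ]; then
    echo "keyring would accept key file (mode $mode)"
fi

rm -f "$key"
```

With the key at 0666, keyring_file_add_key returns -1 ("Operation not permitted") and the subsequent bdev_nvme_attach_controller fails with -126 ("Required key not available"), exactly as the request/response dumps below show.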
00:23:51.927 5293.00 IOPS, 20.68 MiB/s [2024-11-20T13:43:04.894Z] 5362.50 IOPS, 20.95 MiB/s [2024-11-20T13:43:05.829Z] 5339.33 IOPS, 20.86 MiB/s [2024-11-20T13:43:07.207Z] 5300.75 IOPS, 20.71 MiB/s [2024-11-20T13:43:08.145Z] 5296.60 IOPS, 20.69 MiB/s [2024-11-20T13:43:09.082Z] 5284.83 IOPS, 20.64 MiB/s [2024-11-20T13:43:10.019Z] 5303.71 IOPS, 20.72 MiB/s [2024-11-20T13:43:10.953Z] 5319.62 IOPS, 20.78 MiB/s [2024-11-20T13:43:11.889Z] 5325.22 IOPS, 20.80 MiB/s [2024-11-20T13:43:11.889Z] 5323.00 IOPS, 20.79 MiB/s 00:23:59.931 Latency(us) 00:23:59.931 [2024-11-20T13:43:11.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.931 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:59.931 Verification LBA range: start 0x0 length 0x2000 00:23:59.931 TLSTESTn1 : 10.02 5326.92 20.81 0.00 0.00 23992.70 5955.23 23478.98 00:23:59.931 [2024-11-20T13:43:11.889Z] =================================================================================================================== 00:23:59.931 [2024-11-20T13:43:11.889Z] Total : 5326.92 20.81 0.00 0.00 23992.70 5955.23 23478.98 00:23:59.931 { 00:23:59.931 "results": [ 00:23:59.931 { 00:23:59.931 "job": "TLSTESTn1", 00:23:59.931 "core_mask": "0x4", 00:23:59.931 "workload": "verify", 00:23:59.931 "status": "finished", 00:23:59.931 "verify_range": { 00:23:59.931 "start": 0, 00:23:59.931 "length": 8192 00:23:59.931 }, 00:23:59.931 "queue_depth": 128, 00:23:59.931 "io_size": 4096, 00:23:59.931 "runtime": 10.016476, 00:23:59.931 "iops": 5326.9233610703, 00:23:59.931 "mibps": 20.80829437918086, 00:23:59.931 "io_failed": 0, 00:23:59.931 "io_timeout": 0, 00:23:59.931 "avg_latency_us": 23992.70278666016, 00:23:59.931 "min_latency_us": 5955.227826086956, 00:23:59.931 "max_latency_us": 23478.98434782609 00:23:59.931 } 00:23:59.931 ], 00:23:59.931 "core_count": 1 00:23:59.931 } 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1613250 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1613250 ']' 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1613250 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.931 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613250 00:24:00.190 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:00.190 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:00.190 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613250' 00:24:00.190 killing process with pid 1613250 00:24:00.190 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1613250 00:24:00.190 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.190 00:24:00.190 Latency(us) 00:24:00.190 [2024-11-20T13:43:12.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.190 [2024-11-20T13:43:12.148Z] =================================================================================================================== 00:24:00.190 [2024-11-20T13:43:12.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.190 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1613250 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.oqKdPWxmod 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oqKdPWxmod 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oqKdPWxmod 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oqKdPWxmod 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oqKdPWxmod 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.190 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1615030 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1615030 /var/tmp/bdevperf.sock 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1615030 ']' 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.191 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.191 [2024-11-20 14:43:12.123041] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:24:00.191 [2024-11-20 14:43:12.123090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615030 ] 00:24:00.449 [2024-11-20 14:43:12.188315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.449 [2024-11-20 14:43:12.231598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.449 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.449 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.449 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:00.708 [2024-11-20 14:43:12.499937] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oqKdPWxmod': 0100666 00:24:00.708 [2024-11-20 14:43:12.499967] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:00.708 request: 00:24:00.708 { 00:24:00.708 "name": "key0", 00:24:00.708 "path": "/tmp/tmp.oqKdPWxmod", 00:24:00.708 "method": "keyring_file_add_key", 00:24:00.708 "req_id": 1 00:24:00.708 } 00:24:00.708 Got JSON-RPC error response 00:24:00.708 response: 00:24:00.708 { 00:24:00.708 "code": -1, 00:24:00.708 "message": "Operation not permitted" 00:24:00.708 } 00:24:00.708 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.967 [2024-11-20 14:43:12.684502] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.967 [2024-11-20 14:43:12.684533] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:00.967 request: 00:24:00.967 { 00:24:00.967 "name": "TLSTEST", 00:24:00.967 "trtype": "tcp", 00:24:00.967 "traddr": "10.0.0.2", 00:24:00.967 "adrfam": "ipv4", 00:24:00.967 "trsvcid": "4420", 00:24:00.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.967 "prchk_reftag": false, 00:24:00.967 "prchk_guard": false, 00:24:00.967 "hdgst": false, 00:24:00.967 "ddgst": false, 00:24:00.967 "psk": "key0", 00:24:00.967 "allow_unrecognized_csi": false, 00:24:00.967 "method": "bdev_nvme_attach_controller", 00:24:00.967 "req_id": 1 00:24:00.967 } 00:24:00.967 Got JSON-RPC error response 00:24:00.967 response: 00:24:00.967 { 00:24:00.967 "code": -126, 00:24:00.967 "message": "Required key not available" 00:24:00.967 } 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1615030 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1615030 ']' 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1615030 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615030 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1615030' 00:24:00.967 killing process with pid 1615030 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1615030 00:24:00.967 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.967 00:24:00.967 Latency(us) 00:24:00.967 [2024-11-20T13:43:12.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.967 [2024-11-20T13:43:12.925Z] =================================================================================================================== 00:24:00.967 [2024-11-20T13:43:12.925Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1615030 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1612987 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1612987 ']' 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1612987 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.967 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612987 00:24:01.227 
14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.227 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.227 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612987' 00:24:01.227 killing process with pid 1612987 00:24:01.227 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1612987 00:24:01.227 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1612987 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1615108 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1615108 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1615108 ']' 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:24:01.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.227 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.227 [2024-11-20 14:43:13.182598] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:01.227 [2024-11-20 14:43:13.182645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.486 [2024-11-20 14:43:13.263842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.486 [2024-11-20 14:43:13.304439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.486 [2024-11-20 14:43:13.304473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.486 [2024-11-20 14:43:13.304481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.486 [2024-11-20 14:43:13.304487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.486 [2024-11-20 14:43:13.304492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.486 [2024-11-20 14:43:13.305056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.oqKdPWxmod 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oqKdPWxmod 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.oqKdPWxmod 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oqKdPWxmod 00:24:01.486 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:01.745 [2024-11-20 14:43:13.614715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.745 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:02.003 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:02.262 [2024-11-20 14:43:13.979664] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.262 [2024-11-20 14:43:13.979855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.262 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:02.262 malloc0 00:24:02.262 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:02.520 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:02.778 [2024-11-20 14:43:14.549040] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oqKdPWxmod': 0100666 00:24:02.778 [2024-11-20 14:43:14.549069] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:02.778 request: 00:24:02.778 { 00:24:02.778 "name": "key0", 00:24:02.778 "path": "/tmp/tmp.oqKdPWxmod", 00:24:02.778 "method": "keyring_file_add_key", 00:24:02.778 "req_id": 1 
00:24:02.778 } 00:24:02.778 Got JSON-RPC error response 00:24:02.778 response: 00:24:02.778 { 00:24:02.778 "code": -1, 00:24:02.778 "message": "Operation not permitted" 00:24:02.778 } 00:24:02.778 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.778 [2024-11-20 14:43:14.733539] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:02.778 [2024-11-20 14:43:14.733576] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:03.037 request: 00:24:03.037 { 00:24:03.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.037 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.037 "psk": "key0", 00:24:03.037 "method": "nvmf_subsystem_add_host", 00:24:03.037 "req_id": 1 00:24:03.037 } 00:24:03.037 Got JSON-RPC error response 00:24:03.037 response: 00:24:03.037 { 00:24:03.037 "code": -32603, 00:24:03.037 "message": "Internal error" 00:24:03.037 } 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1615108 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1615108 ']' 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1615108 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.037 14:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615108 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615108' 00:24:03.037 killing process with pid 1615108 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1615108 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1615108 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.oqKdPWxmod 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1615587 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1615587 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1615587 ']' 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.037 
14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.037 14:43:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.297 [2024-11-20 14:43:15.030585] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:03.297 [2024-11-20 14:43:15.030625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.297 [2024-11-20 14:43:15.103855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.297 [2024-11-20 14:43:15.142416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.297 [2024-11-20 14:43:15.142450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.297 [2024-11-20 14:43:15.142457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.297 [2024-11-20 14:43:15.142462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.297 [2024-11-20 14:43:15.142467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.297 [2024-11-20 14:43:15.143055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.297 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.297 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:03.297 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:03.297 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:03.297 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.556 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.556 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.oqKdPWxmod 00:24:03.556 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oqKdPWxmod 00:24:03.556 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:03.556 [2024-11-20 14:43:15.440142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.556 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:03.814 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:04.074 [2024-11-20 14:43:15.825126] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.074 [2024-11-20 14:43:15.825348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:04.074 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:04.333 malloc0 00:24:04.333 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:04.333 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:04.592 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1615846 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1615846 /var/tmp/bdevperf.sock 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1615846 ']' 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:24:04.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.851 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.851 [2024-11-20 14:43:16.680892] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:04.851 [2024-11-20 14:43:16.680940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615846 ] 00:24:04.851 [2024-11-20 14:43:16.754380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.851 [2024-11-20 14:43:16.794824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.110 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.110 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:05.110 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:05.369 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.369 [2024-11-20 14:43:17.255220] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.628 TLSTESTn1 00:24:05.628 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:05.887 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:05.888 "subsystems": [ 00:24:05.888 { 00:24:05.888 "subsystem": "keyring", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": "keyring_file_add_key", 00:24:05.888 "params": { 00:24:05.888 "name": "key0", 00:24:05.888 "path": "/tmp/tmp.oqKdPWxmod" 00:24:05.888 } 00:24:05.888 } 00:24:05.888 ] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "iobuf", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": "iobuf_set_options", 00:24:05.888 "params": { 00:24:05.888 "small_pool_count": 8192, 00:24:05.888 "large_pool_count": 1024, 00:24:05.888 "small_bufsize": 8192, 00:24:05.888 "large_bufsize": 135168, 00:24:05.888 "enable_numa": false 00:24:05.888 } 00:24:05.888 } 00:24:05.888 ] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "sock", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": "sock_set_default_impl", 00:24:05.888 "params": { 00:24:05.888 "impl_name": "posix" 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "sock_impl_set_options", 00:24:05.888 "params": { 00:24:05.888 "impl_name": "ssl", 00:24:05.888 "recv_buf_size": 4096, 00:24:05.888 "send_buf_size": 4096, 00:24:05.888 "enable_recv_pipe": true, 00:24:05.888 "enable_quickack": false, 00:24:05.888 "enable_placement_id": 0, 00:24:05.888 "enable_zerocopy_send_server": true, 00:24:05.888 "enable_zerocopy_send_client": false, 00:24:05.888 "zerocopy_threshold": 0, 00:24:05.888 "tls_version": 0, 00:24:05.888 "enable_ktls": false 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "sock_impl_set_options", 00:24:05.888 "params": { 00:24:05.888 "impl_name": "posix", 00:24:05.888 "recv_buf_size": 2097152, 00:24:05.888 "send_buf_size": 2097152, 00:24:05.888 "enable_recv_pipe": true, 00:24:05.888 "enable_quickack": false, 00:24:05.888 "enable_placement_id": 0, 
00:24:05.888 "enable_zerocopy_send_server": true, 00:24:05.888 "enable_zerocopy_send_client": false, 00:24:05.888 "zerocopy_threshold": 0, 00:24:05.888 "tls_version": 0, 00:24:05.888 "enable_ktls": false 00:24:05.888 } 00:24:05.888 } 00:24:05.888 ] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "vmd", 00:24:05.888 "config": [] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "accel", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": "accel_set_options", 00:24:05.888 "params": { 00:24:05.888 "small_cache_size": 128, 00:24:05.888 "large_cache_size": 16, 00:24:05.888 "task_count": 2048, 00:24:05.888 "sequence_count": 2048, 00:24:05.888 "buf_count": 2048 00:24:05.888 } 00:24:05.888 } 00:24:05.888 ] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "bdev", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": "bdev_set_options", 00:24:05.888 "params": { 00:24:05.888 "bdev_io_pool_size": 65535, 00:24:05.888 "bdev_io_cache_size": 256, 00:24:05.888 "bdev_auto_examine": true, 00:24:05.888 "iobuf_small_cache_size": 128, 00:24:05.888 "iobuf_large_cache_size": 16 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "bdev_raid_set_options", 00:24:05.888 "params": { 00:24:05.888 "process_window_size_kb": 1024, 00:24:05.888 "process_max_bandwidth_mb_sec": 0 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "bdev_iscsi_set_options", 00:24:05.888 "params": { 00:24:05.888 "timeout_sec": 30 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "bdev_nvme_set_options", 00:24:05.888 "params": { 00:24:05.888 "action_on_timeout": "none", 00:24:05.888 "timeout_us": 0, 00:24:05.888 "timeout_admin_us": 0, 00:24:05.888 "keep_alive_timeout_ms": 10000, 00:24:05.888 "arbitration_burst": 0, 00:24:05.888 "low_priority_weight": 0, 00:24:05.888 "medium_priority_weight": 0, 00:24:05.888 "high_priority_weight": 0, 00:24:05.888 "nvme_adminq_poll_period_us": 10000, 00:24:05.888 "nvme_ioq_poll_period_us": 0, 
00:24:05.888 "io_queue_requests": 0, 00:24:05.888 "delay_cmd_submit": true, 00:24:05.888 "transport_retry_count": 4, 00:24:05.888 "bdev_retry_count": 3, 00:24:05.888 "transport_ack_timeout": 0, 00:24:05.888 "ctrlr_loss_timeout_sec": 0, 00:24:05.888 "reconnect_delay_sec": 0, 00:24:05.888 "fast_io_fail_timeout_sec": 0, 00:24:05.888 "disable_auto_failback": false, 00:24:05.888 "generate_uuids": false, 00:24:05.888 "transport_tos": 0, 00:24:05.888 "nvme_error_stat": false, 00:24:05.888 "rdma_srq_size": 0, 00:24:05.888 "io_path_stat": false, 00:24:05.888 "allow_accel_sequence": false, 00:24:05.888 "rdma_max_cq_size": 0, 00:24:05.888 "rdma_cm_event_timeout_ms": 0, 00:24:05.888 "dhchap_digests": [ 00:24:05.888 "sha256", 00:24:05.888 "sha384", 00:24:05.888 "sha512" 00:24:05.888 ], 00:24:05.888 "dhchap_dhgroups": [ 00:24:05.888 "null", 00:24:05.888 "ffdhe2048", 00:24:05.888 "ffdhe3072", 00:24:05.888 "ffdhe4096", 00:24:05.888 "ffdhe6144", 00:24:05.888 "ffdhe8192" 00:24:05.888 ] 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "bdev_nvme_set_hotplug", 00:24:05.888 "params": { 00:24:05.888 "period_us": 100000, 00:24:05.888 "enable": false 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "bdev_malloc_create", 00:24:05.888 "params": { 00:24:05.888 "name": "malloc0", 00:24:05.888 "num_blocks": 8192, 00:24:05.888 "block_size": 4096, 00:24:05.888 "physical_block_size": 4096, 00:24:05.888 "uuid": "1365a18a-8ae9-488b-94c7-5c20c3b657dc", 00:24:05.888 "optimal_io_boundary": 0, 00:24:05.888 "md_size": 0, 00:24:05.888 "dif_type": 0, 00:24:05.888 "dif_is_head_of_md": false, 00:24:05.888 "dif_pi_format": 0 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "bdev_wait_for_examine" 00:24:05.888 } 00:24:05.888 ] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "nbd", 00:24:05.888 "config": [] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "scheduler", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": 
"framework_set_scheduler", 00:24:05.888 "params": { 00:24:05.888 "name": "static" 00:24:05.888 } 00:24:05.888 } 00:24:05.888 ] 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "subsystem": "nvmf", 00:24:05.888 "config": [ 00:24:05.888 { 00:24:05.888 "method": "nvmf_set_config", 00:24:05.888 "params": { 00:24:05.888 "discovery_filter": "match_any", 00:24:05.888 "admin_cmd_passthru": { 00:24:05.888 "identify_ctrlr": false 00:24:05.888 }, 00:24:05.888 "dhchap_digests": [ 00:24:05.888 "sha256", 00:24:05.888 "sha384", 00:24:05.888 "sha512" 00:24:05.888 ], 00:24:05.888 "dhchap_dhgroups": [ 00:24:05.888 "null", 00:24:05.888 "ffdhe2048", 00:24:05.888 "ffdhe3072", 00:24:05.888 "ffdhe4096", 00:24:05.888 "ffdhe6144", 00:24:05.888 "ffdhe8192" 00:24:05.888 ] 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "nvmf_set_max_subsystems", 00:24:05.888 "params": { 00:24:05.888 "max_subsystems": 1024 00:24:05.888 } 00:24:05.888 }, 00:24:05.888 { 00:24:05.888 "method": "nvmf_set_crdt", 00:24:05.888 "params": { 00:24:05.888 "crdt1": 0, 00:24:05.888 "crdt2": 0, 00:24:05.888 "crdt3": 0 00:24:05.888 } 00:24:05.888 }, 00:24:05.889 { 00:24:05.889 "method": "nvmf_create_transport", 00:24:05.889 "params": { 00:24:05.889 "trtype": "TCP", 00:24:05.889 "max_queue_depth": 128, 00:24:05.889 "max_io_qpairs_per_ctrlr": 127, 00:24:05.889 "in_capsule_data_size": 4096, 00:24:05.889 "max_io_size": 131072, 00:24:05.889 "io_unit_size": 131072, 00:24:05.889 "max_aq_depth": 128, 00:24:05.889 "num_shared_buffers": 511, 00:24:05.889 "buf_cache_size": 4294967295, 00:24:05.889 "dif_insert_or_strip": false, 00:24:05.889 "zcopy": false, 00:24:05.889 "c2h_success": false, 00:24:05.889 "sock_priority": 0, 00:24:05.889 "abort_timeout_sec": 1, 00:24:05.889 "ack_timeout": 0, 00:24:05.889 "data_wr_pool_size": 0 00:24:05.889 } 00:24:05.889 }, 00:24:05.889 { 00:24:05.889 "method": "nvmf_create_subsystem", 00:24:05.889 "params": { 00:24:05.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.889 
"allow_any_host": false, 00:24:05.889 "serial_number": "SPDK00000000000001", 00:24:05.889 "model_number": "SPDK bdev Controller", 00:24:05.889 "max_namespaces": 10, 00:24:05.889 "min_cntlid": 1, 00:24:05.889 "max_cntlid": 65519, 00:24:05.889 "ana_reporting": false 00:24:05.889 } 00:24:05.889 }, 00:24:05.889 { 00:24:05.889 "method": "nvmf_subsystem_add_host", 00:24:05.889 "params": { 00:24:05.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.889 "host": "nqn.2016-06.io.spdk:host1", 00:24:05.889 "psk": "key0" 00:24:05.889 } 00:24:05.889 }, 00:24:05.889 { 00:24:05.889 "method": "nvmf_subsystem_add_ns", 00:24:05.889 "params": { 00:24:05.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.889 "namespace": { 00:24:05.889 "nsid": 1, 00:24:05.889 "bdev_name": "malloc0", 00:24:05.889 "nguid": "1365A18A8AE9488B94C75C20C3B657DC", 00:24:05.889 "uuid": "1365a18a-8ae9-488b-94c7-5c20c3b657dc", 00:24:05.889 "no_auto_visible": false 00:24:05.889 } 00:24:05.889 } 00:24:05.889 }, 00:24:05.889 { 00:24:05.889 "method": "nvmf_subsystem_add_listener", 00:24:05.889 "params": { 00:24:05.889 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.889 "listen_address": { 00:24:05.889 "trtype": "TCP", 00:24:05.889 "adrfam": "IPv4", 00:24:05.889 "traddr": "10.0.0.2", 00:24:05.889 "trsvcid": "4420" 00:24:05.889 }, 00:24:05.889 "secure_channel": true 00:24:05.889 } 00:24:05.889 } 00:24:05.889 ] 00:24:05.889 } 00:24:05.889 ] 00:24:05.889 }' 00:24:05.889 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:06.148 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:06.148 "subsystems": [ 00:24:06.148 { 00:24:06.148 "subsystem": "keyring", 00:24:06.148 "config": [ 00:24:06.148 { 00:24:06.148 "method": "keyring_file_add_key", 00:24:06.148 "params": { 00:24:06.148 "name": "key0", 00:24:06.149 "path": "/tmp/tmp.oqKdPWxmod" 00:24:06.149 } 
00:24:06.149 } 00:24:06.149 ] 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "subsystem": "iobuf", 00:24:06.149 "config": [ 00:24:06.149 { 00:24:06.149 "method": "iobuf_set_options", 00:24:06.149 "params": { 00:24:06.149 "small_pool_count": 8192, 00:24:06.149 "large_pool_count": 1024, 00:24:06.149 "small_bufsize": 8192, 00:24:06.149 "large_bufsize": 135168, 00:24:06.149 "enable_numa": false 00:24:06.149 } 00:24:06.149 } 00:24:06.149 ] 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "subsystem": "sock", 00:24:06.149 "config": [ 00:24:06.149 { 00:24:06.149 "method": "sock_set_default_impl", 00:24:06.149 "params": { 00:24:06.149 "impl_name": "posix" 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "sock_impl_set_options", 00:24:06.149 "params": { 00:24:06.149 "impl_name": "ssl", 00:24:06.149 "recv_buf_size": 4096, 00:24:06.149 "send_buf_size": 4096, 00:24:06.149 "enable_recv_pipe": true, 00:24:06.149 "enable_quickack": false, 00:24:06.149 "enable_placement_id": 0, 00:24:06.149 "enable_zerocopy_send_server": true, 00:24:06.149 "enable_zerocopy_send_client": false, 00:24:06.149 "zerocopy_threshold": 0, 00:24:06.149 "tls_version": 0, 00:24:06.149 "enable_ktls": false 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "sock_impl_set_options", 00:24:06.149 "params": { 00:24:06.149 "impl_name": "posix", 00:24:06.149 "recv_buf_size": 2097152, 00:24:06.149 "send_buf_size": 2097152, 00:24:06.149 "enable_recv_pipe": true, 00:24:06.149 "enable_quickack": false, 00:24:06.149 "enable_placement_id": 0, 00:24:06.149 "enable_zerocopy_send_server": true, 00:24:06.149 "enable_zerocopy_send_client": false, 00:24:06.149 "zerocopy_threshold": 0, 00:24:06.149 "tls_version": 0, 00:24:06.149 "enable_ktls": false 00:24:06.149 } 00:24:06.149 } 00:24:06.149 ] 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "subsystem": "vmd", 00:24:06.149 "config": [] 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "subsystem": "accel", 00:24:06.149 "config": [ 00:24:06.149 { 00:24:06.149 
"method": "accel_set_options", 00:24:06.149 "params": { 00:24:06.149 "small_cache_size": 128, 00:24:06.149 "large_cache_size": 16, 00:24:06.149 "task_count": 2048, 00:24:06.149 "sequence_count": 2048, 00:24:06.149 "buf_count": 2048 00:24:06.149 } 00:24:06.149 } 00:24:06.149 ] 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "subsystem": "bdev", 00:24:06.149 "config": [ 00:24:06.149 { 00:24:06.149 "method": "bdev_set_options", 00:24:06.149 "params": { 00:24:06.149 "bdev_io_pool_size": 65535, 00:24:06.149 "bdev_io_cache_size": 256, 00:24:06.149 "bdev_auto_examine": true, 00:24:06.149 "iobuf_small_cache_size": 128, 00:24:06.149 "iobuf_large_cache_size": 16 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "bdev_raid_set_options", 00:24:06.149 "params": { 00:24:06.149 "process_window_size_kb": 1024, 00:24:06.149 "process_max_bandwidth_mb_sec": 0 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "bdev_iscsi_set_options", 00:24:06.149 "params": { 00:24:06.149 "timeout_sec": 30 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "bdev_nvme_set_options", 00:24:06.149 "params": { 00:24:06.149 "action_on_timeout": "none", 00:24:06.149 "timeout_us": 0, 00:24:06.149 "timeout_admin_us": 0, 00:24:06.149 "keep_alive_timeout_ms": 10000, 00:24:06.149 "arbitration_burst": 0, 00:24:06.149 "low_priority_weight": 0, 00:24:06.149 "medium_priority_weight": 0, 00:24:06.149 "high_priority_weight": 0, 00:24:06.149 "nvme_adminq_poll_period_us": 10000, 00:24:06.149 "nvme_ioq_poll_period_us": 0, 00:24:06.149 "io_queue_requests": 512, 00:24:06.149 "delay_cmd_submit": true, 00:24:06.149 "transport_retry_count": 4, 00:24:06.149 "bdev_retry_count": 3, 00:24:06.149 "transport_ack_timeout": 0, 00:24:06.149 "ctrlr_loss_timeout_sec": 0, 00:24:06.149 "reconnect_delay_sec": 0, 00:24:06.149 "fast_io_fail_timeout_sec": 0, 00:24:06.149 "disable_auto_failback": false, 00:24:06.149 "generate_uuids": false, 00:24:06.149 "transport_tos": 0, 00:24:06.149 
"nvme_error_stat": false, 00:24:06.149 "rdma_srq_size": 0, 00:24:06.149 "io_path_stat": false, 00:24:06.149 "allow_accel_sequence": false, 00:24:06.149 "rdma_max_cq_size": 0, 00:24:06.149 "rdma_cm_event_timeout_ms": 0, 00:24:06.149 "dhchap_digests": [ 00:24:06.149 "sha256", 00:24:06.149 "sha384", 00:24:06.149 "sha512" 00:24:06.149 ], 00:24:06.149 "dhchap_dhgroups": [ 00:24:06.149 "null", 00:24:06.149 "ffdhe2048", 00:24:06.149 "ffdhe3072", 00:24:06.149 "ffdhe4096", 00:24:06.149 "ffdhe6144", 00:24:06.149 "ffdhe8192" 00:24:06.149 ] 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "bdev_nvme_attach_controller", 00:24:06.149 "params": { 00:24:06.149 "name": "TLSTEST", 00:24:06.149 "trtype": "TCP", 00:24:06.149 "adrfam": "IPv4", 00:24:06.149 "traddr": "10.0.0.2", 00:24:06.149 "trsvcid": "4420", 00:24:06.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.149 "prchk_reftag": false, 00:24:06.149 "prchk_guard": false, 00:24:06.149 "ctrlr_loss_timeout_sec": 0, 00:24:06.149 "reconnect_delay_sec": 0, 00:24:06.149 "fast_io_fail_timeout_sec": 0, 00:24:06.149 "psk": "key0", 00:24:06.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.149 "hdgst": false, 00:24:06.149 "ddgst": false, 00:24:06.149 "multipath": "multipath" 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "bdev_nvme_set_hotplug", 00:24:06.149 "params": { 00:24:06.149 "period_us": 100000, 00:24:06.149 "enable": false 00:24:06.149 } 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "method": "bdev_wait_for_examine" 00:24:06.149 } 00:24:06.149 ] 00:24:06.149 }, 00:24:06.149 { 00:24:06.149 "subsystem": "nbd", 00:24:06.149 "config": [] 00:24:06.149 } 00:24:06.149 ] 00:24:06.149 }' 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1615846 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1615846 ']' 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1615846 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615846 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615846' 00:24:06.149 killing process with pid 1615846 00:24:06.149 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1615846 00:24:06.149 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.149 00:24:06.149 Latency(us) 00:24:06.149 [2024-11-20T13:43:18.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.149 [2024-11-20T13:43:18.107Z] =================================================================================================================== 00:24:06.149 [2024-11-20T13:43:18.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:06.150 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1615846 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1615587 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1615587 ']' 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1615587 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615587 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615587' 00:24:06.409 killing process with pid 1615587 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1615587 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1615587 00:24:06.409 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:06.409 "subsystems": [ 00:24:06.409 { 00:24:06.409 "subsystem": "keyring", 00:24:06.409 "config": [ 00:24:06.409 { 00:24:06.409 "method": "keyring_file_add_key", 00:24:06.409 "params": { 00:24:06.409 "name": "key0", 00:24:06.409 "path": "/tmp/tmp.oqKdPWxmod" 00:24:06.409 } 00:24:06.409 } 00:24:06.409 ] 00:24:06.409 }, 00:24:06.409 { 00:24:06.409 "subsystem": "iobuf", 00:24:06.409 "config": [ 00:24:06.409 { 00:24:06.409 "method": "iobuf_set_options", 00:24:06.409 "params": { 00:24:06.410 "small_pool_count": 8192, 00:24:06.410 "large_pool_count": 1024, 00:24:06.410 "small_bufsize": 8192, 00:24:06.410 "large_bufsize": 135168, 00:24:06.410 "enable_numa": false 00:24:06.410 } 00:24:06.410 } 00:24:06.410 ] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "sock", 00:24:06.410 "config": [ 00:24:06.410 { 00:24:06.410 "method": "sock_set_default_impl", 00:24:06.410 "params": { 00:24:06.410 "impl_name": "posix" 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "sock_impl_set_options", 00:24:06.410 "params": { 00:24:06.410 "impl_name": "ssl", 
00:24:06.410 "recv_buf_size": 4096, 00:24:06.410 "send_buf_size": 4096, 00:24:06.410 "enable_recv_pipe": true, 00:24:06.410 "enable_quickack": false, 00:24:06.410 "enable_placement_id": 0, 00:24:06.410 "enable_zerocopy_send_server": true, 00:24:06.410 "enable_zerocopy_send_client": false, 00:24:06.410 "zerocopy_threshold": 0, 00:24:06.410 "tls_version": 0, 00:24:06.410 "enable_ktls": false 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "sock_impl_set_options", 00:24:06.410 "params": { 00:24:06.410 "impl_name": "posix", 00:24:06.410 "recv_buf_size": 2097152, 00:24:06.410 "send_buf_size": 2097152, 00:24:06.410 "enable_recv_pipe": true, 00:24:06.410 "enable_quickack": false, 00:24:06.410 "enable_placement_id": 0, 00:24:06.410 "enable_zerocopy_send_server": true, 00:24:06.410 "enable_zerocopy_send_client": false, 00:24:06.410 "zerocopy_threshold": 0, 00:24:06.410 "tls_version": 0, 00:24:06.410 "enable_ktls": false 00:24:06.410 } 00:24:06.410 } 00:24:06.410 ] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "vmd", 00:24:06.410 "config": [] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "accel", 00:24:06.410 "config": [ 00:24:06.410 { 00:24:06.410 "method": "accel_set_options", 00:24:06.410 "params": { 00:24:06.410 "small_cache_size": 128, 00:24:06.410 "large_cache_size": 16, 00:24:06.410 "task_count": 2048, 00:24:06.410 "sequence_count": 2048, 00:24:06.410 "buf_count": 2048 00:24:06.410 } 00:24:06.410 } 00:24:06.410 ] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "bdev", 00:24:06.410 "config": [ 00:24:06.410 { 00:24:06.410 "method": "bdev_set_options", 00:24:06.410 "params": { 00:24:06.410 "bdev_io_pool_size": 65535, 00:24:06.410 "bdev_io_cache_size": 256, 00:24:06.410 "bdev_auto_examine": true, 00:24:06.410 "iobuf_small_cache_size": 128, 00:24:06.410 "iobuf_large_cache_size": 16 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "bdev_raid_set_options", 00:24:06.410 "params": { 00:24:06.410 
"process_window_size_kb": 1024, 00:24:06.410 "process_max_bandwidth_mb_sec": 0 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "bdev_iscsi_set_options", 00:24:06.410 "params": { 00:24:06.410 "timeout_sec": 30 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "bdev_nvme_set_options", 00:24:06.410 "params": { 00:24:06.410 "action_on_timeout": "none", 00:24:06.410 "timeout_us": 0, 00:24:06.410 "timeout_admin_us": 0, 00:24:06.410 "keep_alive_timeout_ms": 10000, 00:24:06.410 "arbitration_burst": 0, 00:24:06.410 "low_priority_weight": 0, 00:24:06.410 "medium_priority_weight": 0, 00:24:06.410 "high_priority_weight": 0, 00:24:06.410 "nvme_adminq_poll_period_us": 10000, 00:24:06.410 "nvme_ioq_poll_period_us": 0, 00:24:06.410 "io_queue_requests": 0, 00:24:06.410 "delay_cmd_submit": true, 00:24:06.410 "transport_retry_count": 4, 00:24:06.410 "bdev_retry_count": 3, 00:24:06.410 "transport_ack_timeout": 0, 00:24:06.410 "ctrlr_loss_timeout_sec": 0, 00:24:06.410 "reconnect_delay_sec": 0, 00:24:06.410 "fast_io_fail_timeout_sec": 0, 00:24:06.410 "disable_auto_failback": false, 00:24:06.410 "generate_uuids": false, 00:24:06.410 "transport_tos": 0, 00:24:06.410 "nvme_error_stat": false, 00:24:06.410 "rdma_srq_size": 0, 00:24:06.410 "io_path_stat": false, 00:24:06.410 "allow_accel_sequence": false, 00:24:06.410 "rdma_max_cq_size": 0, 00:24:06.410 "rdma_cm_event_timeout_ms": 0, 00:24:06.410 "dhchap_digests": [ 00:24:06.410 "sha256", 00:24:06.410 "sha384", 00:24:06.410 "sha512" 00:24:06.410 ], 00:24:06.410 "dhchap_dhgroups": [ 00:24:06.410 "null", 00:24:06.410 "ffdhe2048", 00:24:06.410 "ffdhe3072", 00:24:06.410 "ffdhe4096", 00:24:06.410 "ffdhe6144", 00:24:06.410 "ffdhe8192" 00:24:06.410 ] 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "bdev_nvme_set_hotplug", 00:24:06.410 "params": { 00:24:06.410 "period_us": 100000, 00:24:06.410 "enable": false 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": 
"bdev_malloc_create", 00:24:06.410 "params": { 00:24:06.410 "name": "malloc0", 00:24:06.410 "num_blocks": 8192, 00:24:06.410 "block_size": 4096, 00:24:06.410 "physical_block_size": 4096, 00:24:06.410 "uuid": "1365a18a-8ae9-488b-94c7-5c20c3b657dc", 00:24:06.410 "optimal_io_boundary": 0, 00:24:06.410 "md_size": 0, 00:24:06.410 "dif_type": 0, 00:24:06.410 "dif_is_head_of_md": false, 00:24:06.410 "dif_pi_format": 0 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "bdev_wait_for_examine" 00:24:06.410 } 00:24:06.410 ] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "nbd", 00:24:06.410 "config": [] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "scheduler", 00:24:06.410 "config": [ 00:24:06.410 { 00:24:06.410 "method": "framework_set_scheduler", 00:24:06.410 "params": { 00:24:06.410 "name": "static" 00:24:06.410 } 00:24:06.410 } 00:24:06.410 ] 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "subsystem": "nvmf", 00:24:06.410 "config": [ 00:24:06.410 { 00:24:06.410 "method": "nvmf_set_config", 00:24:06.410 "params": { 00:24:06.410 "discovery_filter": "match_any", 00:24:06.410 "admin_cmd_passthru": { 00:24:06.410 "identify_ctrlr": false 00:24:06.410 }, 00:24:06.410 "dhchap_digests": [ 00:24:06.410 "sha256", 00:24:06.410 "sha384", 00:24:06.410 "sha512" 00:24:06.410 ], 00:24:06.410 "dhchap_dhgroups": [ 00:24:06.410 "null", 00:24:06.410 "ffdhe2048", 00:24:06.410 "ffdhe3072", 00:24:06.410 "ffdhe4096", 00:24:06.410 "ffdhe6144", 00:24:06.410 "ffdhe8192" 00:24:06.410 ] 00:24:06.410 } 00:24:06.410 }, 00:24:06.410 { 00:24:06.410 "method": "nvmf_set_max_subsystems", 00:24:06.410 "params": { 00:24:06.410 "max_subsystems": 1024 00:24:06.411 } 00:24:06.411 }, 00:24:06.411 { 00:24:06.411 "method": "nvmf_set_crdt", 00:24:06.411 "params": { 00:24:06.411 "crdt1": 0, 00:24:06.411 "crdt2": 0, 00:24:06.411 "crdt3": 0 00:24:06.411 } 00:24:06.411 }, 00:24:06.411 { 00:24:06.411 "method": "nvmf_create_transport", 00:24:06.411 "params": { 00:24:06.411 "trtype": 
"TCP", 00:24:06.411 "max_queue_depth": 128, 00:24:06.411 "max_io_qpairs_per_ctrlr": 127, 00:24:06.411 "in_capsule_data_size": 4096, 00:24:06.411 "max_io_size": 131072, 00:24:06.411 "io_unit_size": 131072, 00:24:06.411 "max_aq_depth": 128, 00:24:06.411 "num_shared_buffers": 511, 00:24:06.411 "buf_cache_size": 4294967295, 00:24:06.411 "dif_insert_or_strip": false, 00:24:06.411 "zcopy": false, 00:24:06.411 "c2h_success": false, 00:24:06.411 "sock_priority": 0, 00:24:06.411 "abort_timeout_sec": 1, 00:24:06.411 "ack_timeout": 0, 00:24:06.411 "data_wr_pool_size": 0 00:24:06.411 } 00:24:06.411 }, 00:24:06.411 { 00:24:06.411 "method": "nvmf_create_subsystem", 00:24:06.411 "params": { 00:24:06.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.411 "allow_any_host": false, 00:24:06.411 "serial_number": "SPDK00000000000001", 00:24:06.411 "model_number": "SPDK bdev Controller", 00:24:06.411 "max_namespaces": 10, 00:24:06.411 "min_cntlid": 1, 00:24:06.411 "max_cntlid": 65519, 00:24:06.411 "ana_reporting": false 00:24:06.411 } 00:24:06.411 }, 00:24:06.411 { 00:24:06.411 "method": "nvmf_subsystem_add_host", 00:24:06.411 "params": { 00:24:06.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.411 "host": "nqn.2016-06.io.spdk:host1", 00:24:06.411 "psk": "key0" 00:24:06.411 } 00:24:06.411 }, 00:24:06.411 { 00:24:06.411 "method": "nvmf_subsystem_add_ns", 00:24:06.411 "params": { 00:24:06.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.411 "namespace": { 00:24:06.411 "nsid": 1, 00:24:06.411 "bdev_name": "malloc0", 00:24:06.411 "nguid": "1365A18A8AE9488B94C75C20C3B657DC", 00:24:06.411 "uuid": "1365a18a-8ae9-488b-94c7-5c20c3b657dc", 00:24:06.411 "no_auto_visible": false 00:24:06.411 } 00:24:06.411 } 00:24:06.411 }, 00:24:06.411 { 00:24:06.411 "method": "nvmf_subsystem_add_listener", 00:24:06.411 "params": { 00:24:06.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.411 "listen_address": { 00:24:06.411 "trtype": "TCP", 00:24:06.411 "adrfam": "IPv4", 00:24:06.411 "traddr": "10.0.0.2", 
00:24:06.411 "trsvcid": "4420" 00:24:06.411 }, 00:24:06.411 "secure_channel": true 00:24:06.411 } 00:24:06.411 } 00:24:06.411 ] 00:24:06.411 } 00:24:06.411 ] 00:24:06.411 }' 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1616099 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1616099 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1616099 ']' 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.411 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.670 [2024-11-20 14:43:18.384149] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:06.670 [2024-11-20 14:43:18.384193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.670 [2024-11-20 14:43:18.462633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.671 [2024-11-20 14:43:18.503449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.671 [2024-11-20 14:43:18.503487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.671 [2024-11-20 14:43:18.503494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.671 [2024-11-20 14:43:18.503500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.671 [2024-11-20 14:43:18.503505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.671 [2024-11-20 14:43:18.504083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.930 [2024-11-20 14:43:18.718196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.930 [2024-11-20 14:43:18.750219] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:06.930 [2024-11-20 14:43:18.750415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1616337 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1616337 /var/tmp/bdevperf.sock 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1616337 ']' 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.501 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:07.501 "subsystems": [ 00:24:07.501 { 00:24:07.501 "subsystem": "keyring", 00:24:07.501 "config": [ 00:24:07.501 { 00:24:07.501 "method": "keyring_file_add_key", 00:24:07.501 "params": { 00:24:07.501 "name": "key0", 00:24:07.501 "path": "/tmp/tmp.oqKdPWxmod" 00:24:07.501 } 00:24:07.501 } 00:24:07.501 ] 00:24:07.501 }, 00:24:07.501 { 00:24:07.501 "subsystem": "iobuf", 00:24:07.501 "config": [ 00:24:07.501 { 00:24:07.501 "method": "iobuf_set_options", 00:24:07.501 "params": { 00:24:07.501 "small_pool_count": 8192, 00:24:07.501 "large_pool_count": 1024, 00:24:07.501 "small_bufsize": 8192, 00:24:07.501 "large_bufsize": 135168, 00:24:07.501 "enable_numa": false 00:24:07.501 } 00:24:07.501 } 00:24:07.501 ] 00:24:07.501 }, 00:24:07.501 { 00:24:07.501 "subsystem": "sock", 00:24:07.501 "config": [ 00:24:07.501 { 00:24:07.501 "method": "sock_set_default_impl", 00:24:07.501 "params": { 00:24:07.501 "impl_name": "posix" 00:24:07.501 } 00:24:07.501 }, 00:24:07.501 { 00:24:07.501 "method": "sock_impl_set_options", 00:24:07.501 "params": { 00:24:07.501 "impl_name": "ssl", 00:24:07.501 "recv_buf_size": 4096, 00:24:07.501 "send_buf_size": 4096, 00:24:07.501 "enable_recv_pipe": true, 00:24:07.501 "enable_quickack": false, 00:24:07.501 "enable_placement_id": 0, 00:24:07.501 "enable_zerocopy_send_server": true, 00:24:07.501 "enable_zerocopy_send_client": false, 00:24:07.501 "zerocopy_threshold": 0, 00:24:07.501 "tls_version": 0, 00:24:07.501 "enable_ktls": false 00:24:07.501 } 00:24:07.501 }, 00:24:07.501 { 00:24:07.501 "method": "sock_impl_set_options", 00:24:07.501 "params": { 
00:24:07.501 "impl_name": "posix", 00:24:07.501 "recv_buf_size": 2097152, 00:24:07.501 "send_buf_size": 2097152, 00:24:07.501 "enable_recv_pipe": true, 00:24:07.501 "enable_quickack": false, 00:24:07.501 "enable_placement_id": 0, 00:24:07.501 "enable_zerocopy_send_server": true, 00:24:07.501 "enable_zerocopy_send_client": false, 00:24:07.501 "zerocopy_threshold": 0, 00:24:07.501 "tls_version": 0, 00:24:07.501 "enable_ktls": false 00:24:07.501 } 00:24:07.501 } 00:24:07.501 ] 00:24:07.501 }, 00:24:07.501 { 00:24:07.502 "subsystem": "vmd", 00:24:07.502 "config": [] 00:24:07.502 }, 00:24:07.502 { 00:24:07.502 "subsystem": "accel", 00:24:07.502 "config": [ 00:24:07.502 { 00:24:07.502 "method": "accel_set_options", 00:24:07.502 "params": { 00:24:07.502 "small_cache_size": 128, 00:24:07.502 "large_cache_size": 16, 00:24:07.502 "task_count": 2048, 00:24:07.502 "sequence_count": 2048, 00:24:07.502 "buf_count": 2048 00:24:07.502 } 00:24:07.502 } 00:24:07.502 ] 00:24:07.502 }, 00:24:07.502 { 00:24:07.502 "subsystem": "bdev", 00:24:07.502 "config": [ 00:24:07.502 { 00:24:07.502 "method": "bdev_set_options", 00:24:07.502 "params": { 00:24:07.502 "bdev_io_pool_size": 65535, 00:24:07.502 "bdev_io_cache_size": 256, 00:24:07.502 "bdev_auto_examine": true, 00:24:07.502 "iobuf_small_cache_size": 128, 00:24:07.502 "iobuf_large_cache_size": 16 00:24:07.502 } 00:24:07.502 }, 00:24:07.502 { 00:24:07.502 "method": "bdev_raid_set_options", 00:24:07.502 "params": { 00:24:07.502 "process_window_size_kb": 1024, 00:24:07.502 "process_max_bandwidth_mb_sec": 0 00:24:07.502 } 00:24:07.502 }, 00:24:07.502 { 00:24:07.502 "method": "bdev_iscsi_set_options", 00:24:07.502 "params": { 00:24:07.502 "timeout_sec": 30 00:24:07.502 } 00:24:07.502 }, 00:24:07.502 { 00:24:07.502 "method": "bdev_nvme_set_options", 00:24:07.502 "params": { 00:24:07.502 "action_on_timeout": "none", 00:24:07.502 "timeout_us": 0, 00:24:07.502 "timeout_admin_us": 0, 00:24:07.502 "keep_alive_timeout_ms": 10000, 00:24:07.502 
"arbitration_burst": 0, 00:24:07.502 "low_priority_weight": 0, 00:24:07.502 "medium_priority_weight": 0, 00:24:07.502 "high_priority_weight": 0, 00:24:07.502 "nvme_adminq_poll_period_us": 10000, 00:24:07.502 "nvme_ioq_poll_period_us": 0, 00:24:07.502 "io_queue_requests": 512, 00:24:07.502 "delay_cmd_submit": true, 00:24:07.502 "transport_retry_count": 4, 00:24:07.502 "bdev_retry_count": 3, 00:24:07.502 "transport_ack_timeout": 0, 00:24:07.502 "ctrlr_loss_timeout_sec": 0, 00:24:07.502 "reconnect_delay_sec": 0, 00:24:07.502 "fast_io_fail_timeout_sec": 0, 00:24:07.502 "disable_auto_failback": false, 00:24:07.502 "generate_uuids": false, 00:24:07.502 "transport_tos": 0, 00:24:07.502 "nvme_error_stat": false, 00:24:07.502 "rdma_srq_size": 0, 00:24:07.502 "io_path_stat": false, 00:24:07.502 "allow_accel_sequence": false, 00:24:07.502 "rdma_max_cq_size": 0, 00:24:07.502 "rdma_cm_event_timeout_ms": 0, 00:24:07.502 "dhchap_digests": [ 00:24:07.502 "sha256", 00:24:07.502 "sha384", 00:24:07.502 "sha512" 00:24:07.502 ], 00:24:07.502 "dhchap_dhgroups": [ 00:24:07.502 "null", 00:24:07.502 "ffdhe2048", 00:24:07.502 "ffdhe3072", 00:24:07.502 "ffdhe4096", 00:24:07.502 "ffdhe6144", 00:24:07.502 "ffdhe8192" 00:24:07.502 ] 00:24:07.502 } 00:24:07.502 }, 00:24:07.502 { 00:24:07.502 "method": "bdev_nvme_attach_controller", 00:24:07.502 "params": { 00:24:07.502 "name": "TLSTEST", 00:24:07.502 "trtype": "TCP", 00:24:07.502 "adrfam": "IPv4", 00:24:07.502 "traddr": "10.0.0.2", 00:24:07.502 "trsvcid": "4420", 00:24:07.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.502 "prchk_reftag": false, 00:24:07.503 "prchk_guard": false, 00:24:07.503 "ctrlr_loss_timeout_sec": 0, 00:24:07.503 "reconnect_delay_sec": 0, 00:24:07.503 "fast_io_fail_timeout_sec": 0, 00:24:07.503 "psk": "key0", 00:24:07.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.503 "hdgst": false, 00:24:07.503 "ddgst": false, 00:24:07.503 "multipath": "multipath" 00:24:07.503 } 00:24:07.503 }, 00:24:07.503 { 00:24:07.503 
"method": "bdev_nvme_set_hotplug", 00:24:07.503 "params": { 00:24:07.503 "period_us": 100000, 00:24:07.503 "enable": false 00:24:07.503 } 00:24:07.503 }, 00:24:07.503 { 00:24:07.503 "method": "bdev_wait_for_examine" 00:24:07.503 } 00:24:07.503 ] 00:24:07.503 }, 00:24:07.503 { 00:24:07.503 "subsystem": "nbd", 00:24:07.503 "config": [] 00:24:07.503 } 00:24:07.503 ] 00:24:07.503 }' 00:24:07.503 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.503 14:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.503 [2024-11-20 14:43:19.302563] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:07.503 [2024-11-20 14:43:19.302612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616337 ] 00:24:07.503 [2024-11-20 14:43:19.376726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.503 [2024-11-20 14:43:19.416731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.762 [2024-11-20 14:43:19.569299] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.329 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.329 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:08.329 14:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:08.329 Running I/O for 10 seconds... 
00:24:10.646 5200.00 IOPS, 20.31 MiB/s [2024-11-20T13:43:23.541Z] 5424.50 IOPS, 21.19 MiB/s [2024-11-20T13:43:24.478Z] 5448.33 IOPS, 21.28 MiB/s [2024-11-20T13:43:25.415Z] 5472.25 IOPS, 21.38 MiB/s [2024-11-20T13:43:26.354Z] 5468.60 IOPS, 21.36 MiB/s [2024-11-20T13:43:27.290Z] 5501.50 IOPS, 21.49 MiB/s [2024-11-20T13:43:28.668Z] 5506.57 IOPS, 21.51 MiB/s [2024-11-20T13:43:29.606Z] 5520.00 IOPS, 21.56 MiB/s [2024-11-20T13:43:30.544Z] 5504.89 IOPS, 21.50 MiB/s [2024-11-20T13:43:30.544Z] 5495.50 IOPS, 21.47 MiB/s 00:24:18.586 Latency(us) 00:24:18.586 [2024-11-20T13:43:30.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.586 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:18.586 Verification LBA range: start 0x0 length 0x2000 00:24:18.586 TLSTESTn1 : 10.01 5500.75 21.49 0.00 0.00 23234.73 5100.41 23706.94 00:24:18.586 [2024-11-20T13:43:30.544Z] =================================================================================================================== 00:24:18.586 [2024-11-20T13:43:30.544Z] Total : 5500.75 21.49 0.00 0.00 23234.73 5100.41 23706.94 00:24:18.586 { 00:24:18.586 "results": [ 00:24:18.586 { 00:24:18.586 "job": "TLSTESTn1", 00:24:18.586 "core_mask": "0x4", 00:24:18.586 "workload": "verify", 00:24:18.586 "status": "finished", 00:24:18.586 "verify_range": { 00:24:18.586 "start": 0, 00:24:18.586 "length": 8192 00:24:18.586 }, 00:24:18.586 "queue_depth": 128, 00:24:18.586 "io_size": 4096, 00:24:18.586 "runtime": 10.013366, 00:24:18.586 "iops": 5500.747700623347, 00:24:18.586 "mibps": 21.487295705559948, 00:24:18.586 "io_failed": 0, 00:24:18.586 "io_timeout": 0, 00:24:18.586 "avg_latency_us": 23234.733662345494, 00:24:18.586 "min_latency_us": 5100.410434782609, 00:24:18.586 "max_latency_us": 23706.935652173914 00:24:18.586 } 00:24:18.586 ], 00:24:18.586 "core_count": 1 00:24:18.586 } 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
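The throughput columns in the results table above are internally consistent: with 4096-byte I/Os, MiB/s is simply IOPS scaled by the I/O size. A quick check of the final TLSTESTn1 row, using the `iops` and `mibps` values from the results JSON in the log:

```python
# Verify the IOPS -> MiB/s conversion for the TLSTESTn1 result row.
# With 4096-byte I/Os: MiB/s = IOPS * 4096 / 2**20 = IOPS / 256.
io_size = 4096
iops = 5500.747700623347          # "iops" from the results JSON above
reported_mibps = 21.487295705559948  # "mibps" from the same JSON

mibps = iops * io_size / 2**20
print(round(mibps, 2))  # → 21.49
```

The same relation holds for every interim sample in the 10-second run (e.g. 5200.00 IOPS → 20.31 MiB/s), confirming the per-second samples and the final aggregate use the same 4 KiB I/O size.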
exit 1' SIGINT SIGTERM EXIT 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1616337 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1616337 ']' 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1616337 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1616337 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1616337' 00:24:18.586 killing process with pid 1616337 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1616337 00:24:18.586 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.586 00:24:18.586 Latency(us) 00:24:18.586 [2024-11-20T13:43:30.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.586 [2024-11-20T13:43:30.544Z] =================================================================================================================== 00:24:18.586 [2024-11-20T13:43:30.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1616337 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1616099 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1616099 ']' 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1616099 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.586 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1616099 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1616099' 00:24:18.846 killing process with pid 1616099 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1616099 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1616099 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1618180 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1618180 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:18.846 
14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1618180 ']' 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.846 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.846 [2024-11-20 14:43:30.790079] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:18.846 [2024-11-20 14:43:30.790129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.106 [2024-11-20 14:43:30.869936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.106 [2024-11-20 14:43:30.908111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.106 [2024-11-20 14:43:30.908146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.106 [2024-11-20 14:43:30.908153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.106 [2024-11-20 14:43:30.908160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:19.106 [2024-11-20 14:43:30.908165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.106 [2024-11-20 14:43:30.908748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.674 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.674 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:19.674 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.674 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.674 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.934 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.934 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.oqKdPWxmod 00:24:19.934 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oqKdPWxmod 00:24:19.934 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:19.934 [2024-11-20 14:43:31.836245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.934 14:43:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:20.193 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:20.453 [2024-11-20 14:43:32.221250] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:20.453 [2024-11-20 14:43:32.221458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.453 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:20.712 malloc0 00:24:20.712 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:20.712 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:20.970 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.230 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1618449 00:24:21.230 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:21.230 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.230 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1618449 /var/tmp/bdevperf.sock 00:24:21.230 14:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1618449 ']' 00:24:21.230 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.230 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.230 
14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.230 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.230 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.230 [2024-11-20 14:43:33.043409] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:21.230 [2024-11-20 14:43:33.043458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618449 ] 00:24:21.230 [2024-11-20 14:43:33.116508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.230 [2024-11-20 14:43:33.160295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.489 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.489 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.489 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:21.489 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:21.748 [2024-11-20 14:43:33.616944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:24:21.748 nvme0n1 00:24:21.748 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:22.007 Running I/O for 1 seconds... 00:24:22.945 5339.00 IOPS, 20.86 MiB/s 00:24:22.945 Latency(us) 00:24:22.945 [2024-11-20T13:43:34.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.945 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:22.945 Verification LBA range: start 0x0 length 0x2000 00:24:22.945 nvme0n1 : 1.01 5392.36 21.06 0.00 0.00 23552.30 5413.84 21769.35 00:24:22.945 [2024-11-20T13:43:34.903Z] =================================================================================================================== 00:24:22.945 [2024-11-20T13:43:34.903Z] Total : 5392.36 21.06 0.00 0.00 23552.30 5413.84 21769.35 00:24:22.945 { 00:24:22.945 "results": [ 00:24:22.945 { 00:24:22.945 "job": "nvme0n1", 00:24:22.945 "core_mask": "0x2", 00:24:22.945 "workload": "verify", 00:24:22.945 "status": "finished", 00:24:22.945 "verify_range": { 00:24:22.945 "start": 0, 00:24:22.945 "length": 8192 00:24:22.945 }, 00:24:22.945 "queue_depth": 128, 00:24:22.945 "io_size": 4096, 00:24:22.945 "runtime": 1.013842, 00:24:22.945 "iops": 5392.358967176345, 00:24:22.945 "mibps": 21.063902215532597, 00:24:22.946 "io_failed": 0, 00:24:22.946 "io_timeout": 0, 00:24:22.946 "avg_latency_us": 23552.303923779833, 00:24:22.946 "min_latency_us": 5413.843478260869, 00:24:22.946 "max_latency_us": 21769.34956521739 00:24:22.946 } 00:24:22.946 ], 00:24:22.946 "core_count": 1 00:24:22.946 } 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1618449 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1618449 ']' 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1618449 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618449 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618449' 00:24:22.946 killing process with pid 1618449 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1618449 00:24:22.946 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.946 00:24:22.946 Latency(us) 00:24:22.946 [2024-11-20T13:43:34.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.946 [2024-11-20T13:43:34.904Z] =================================================================================================================== 00:24:22.946 [2024-11-20T13:43:34.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.946 14:43:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1618449 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1618180 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1618180 ']' 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1618180 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618180 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618180' 00:24:23.206 killing process with pid 1618180 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1618180 00:24:23.206 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1618180 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1618914 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1618914 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1618914 ']' 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.465 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.465 [2024-11-20 14:43:35.318308] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:23.465 [2024-11-20 14:43:35.318353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.465 [2024-11-20 14:43:35.397618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.724 [2024-11-20 14:43:35.439429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.724 [2024-11-20 14:43:35.439460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.724 [2024-11-20 14:43:35.439467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.724 [2024-11-20 14:43:35.439473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.724 [2024-11-20 14:43:35.439478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:23.724 [2024-11-20 14:43:35.440063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.724 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.724 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.724 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.724 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.724 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.725 [2024-11-20 14:43:35.572830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.725 malloc0 00:24:23.725 [2024-11-20 14:43:35.601076] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.725 [2024-11-20 14:43:35.601271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1618936 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1618936 /var/tmp/bdevperf.sock 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1618936 ']' 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.725 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.725 [2024-11-20 14:43:35.675306] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:24:23.725 [2024-11-20 14:43:35.675346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618936 ] 00:24:23.984 [2024-11-20 14:43:35.751241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.984 [2024-11-20 14:43:35.791875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.984 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.984 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.984 14:43:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oqKdPWxmod 00:24:24.246 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:24.543 [2024-11-20 14:43:36.253393] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.543 nvme0n1 00:24:24.543 14:43:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:24.543 Running I/O for 1 seconds... 
00:24:25.559 5363.00 IOPS, 20.95 MiB/s 00:24:25.559 Latency(us) 00:24:25.559 [2024-11-20T13:43:37.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.559 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:25.559 Verification LBA range: start 0x0 length 0x2000 00:24:25.559 nvme0n1 : 1.02 5407.42 21.12 0.00 0.00 23494.54 5641.79 21655.37 00:24:25.559 [2024-11-20T13:43:37.517Z] =================================================================================================================== 00:24:25.559 [2024-11-20T13:43:37.517Z] Total : 5407.42 21.12 0.00 0.00 23494.54 5641.79 21655.37 00:24:25.559 { 00:24:25.559 "results": [ 00:24:25.559 { 00:24:25.559 "job": "nvme0n1", 00:24:25.559 "core_mask": "0x2", 00:24:25.559 "workload": "verify", 00:24:25.559 "status": "finished", 00:24:25.559 "verify_range": { 00:24:25.559 "start": 0, 00:24:25.559 "length": 8192 00:24:25.559 }, 00:24:25.559 "queue_depth": 128, 00:24:25.559 "io_size": 4096, 00:24:25.559 "runtime": 1.015456, 00:24:25.559 "iops": 5407.422872088992, 00:24:25.559 "mibps": 21.122745594097626, 00:24:25.559 "io_failed": 0, 00:24:25.559 "io_timeout": 0, 00:24:25.559 "avg_latency_us": 23494.538983474937, 00:24:25.559 "min_latency_us": 5641.794782608696, 00:24:25.559 "max_latency_us": 21655.373913043477 00:24:25.559 } 00:24:25.559 ], 00:24:25.559 "core_count": 1 00:24:25.559 } 00:24:25.559 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:25.559 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.559 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.819 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.819 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:25.819 "subsystems": [ 00:24:25.819 { 00:24:25.819 "subsystem": 
"keyring", 00:24:25.819 "config": [ 00:24:25.819 { 00:24:25.819 "method": "keyring_file_add_key", 00:24:25.819 "params": { 00:24:25.819 "name": "key0", 00:24:25.819 "path": "/tmp/tmp.oqKdPWxmod" 00:24:25.819 } 00:24:25.819 } 00:24:25.819 ] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "iobuf", 00:24:25.819 "config": [ 00:24:25.819 { 00:24:25.819 "method": "iobuf_set_options", 00:24:25.819 "params": { 00:24:25.819 "small_pool_count": 8192, 00:24:25.819 "large_pool_count": 1024, 00:24:25.819 "small_bufsize": 8192, 00:24:25.819 "large_bufsize": 135168, 00:24:25.819 "enable_numa": false 00:24:25.819 } 00:24:25.819 } 00:24:25.819 ] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "sock", 00:24:25.819 "config": [ 00:24:25.819 { 00:24:25.819 "method": "sock_set_default_impl", 00:24:25.819 "params": { 00:24:25.819 "impl_name": "posix" 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "sock_impl_set_options", 00:24:25.819 "params": { 00:24:25.819 "impl_name": "ssl", 00:24:25.819 "recv_buf_size": 4096, 00:24:25.819 "send_buf_size": 4096, 00:24:25.819 "enable_recv_pipe": true, 00:24:25.819 "enable_quickack": false, 00:24:25.819 "enable_placement_id": 0, 00:24:25.819 "enable_zerocopy_send_server": true, 00:24:25.819 "enable_zerocopy_send_client": false, 00:24:25.819 "zerocopy_threshold": 0, 00:24:25.819 "tls_version": 0, 00:24:25.819 "enable_ktls": false 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "sock_impl_set_options", 00:24:25.819 "params": { 00:24:25.819 "impl_name": "posix", 00:24:25.819 "recv_buf_size": 2097152, 00:24:25.819 "send_buf_size": 2097152, 00:24:25.819 "enable_recv_pipe": true, 00:24:25.819 "enable_quickack": false, 00:24:25.819 "enable_placement_id": 0, 00:24:25.819 "enable_zerocopy_send_server": true, 00:24:25.819 "enable_zerocopy_send_client": false, 00:24:25.819 "zerocopy_threshold": 0, 00:24:25.819 "tls_version": 0, 00:24:25.819 "enable_ktls": false 00:24:25.819 } 00:24:25.819 } 00:24:25.819 
] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "vmd", 00:24:25.819 "config": [] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "accel", 00:24:25.819 "config": [ 00:24:25.819 { 00:24:25.819 "method": "accel_set_options", 00:24:25.819 "params": { 00:24:25.819 "small_cache_size": 128, 00:24:25.819 "large_cache_size": 16, 00:24:25.819 "task_count": 2048, 00:24:25.819 "sequence_count": 2048, 00:24:25.819 "buf_count": 2048 00:24:25.819 } 00:24:25.819 } 00:24:25.819 ] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "bdev", 00:24:25.819 "config": [ 00:24:25.819 { 00:24:25.819 "method": "bdev_set_options", 00:24:25.819 "params": { 00:24:25.819 "bdev_io_pool_size": 65535, 00:24:25.819 "bdev_io_cache_size": 256, 00:24:25.819 "bdev_auto_examine": true, 00:24:25.819 "iobuf_small_cache_size": 128, 00:24:25.819 "iobuf_large_cache_size": 16 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "bdev_raid_set_options", 00:24:25.819 "params": { 00:24:25.819 "process_window_size_kb": 1024, 00:24:25.819 "process_max_bandwidth_mb_sec": 0 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "bdev_iscsi_set_options", 00:24:25.819 "params": { 00:24:25.819 "timeout_sec": 30 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "bdev_nvme_set_options", 00:24:25.819 "params": { 00:24:25.819 "action_on_timeout": "none", 00:24:25.819 "timeout_us": 0, 00:24:25.819 "timeout_admin_us": 0, 00:24:25.819 "keep_alive_timeout_ms": 10000, 00:24:25.819 "arbitration_burst": 0, 00:24:25.819 "low_priority_weight": 0, 00:24:25.819 "medium_priority_weight": 0, 00:24:25.819 "high_priority_weight": 0, 00:24:25.819 "nvme_adminq_poll_period_us": 10000, 00:24:25.819 "nvme_ioq_poll_period_us": 0, 00:24:25.819 "io_queue_requests": 0, 00:24:25.819 "delay_cmd_submit": true, 00:24:25.819 "transport_retry_count": 4, 00:24:25.819 "bdev_retry_count": 3, 00:24:25.819 "transport_ack_timeout": 0, 00:24:25.819 "ctrlr_loss_timeout_sec": 0, 
00:24:25.819 "reconnect_delay_sec": 0, 00:24:25.819 "fast_io_fail_timeout_sec": 0, 00:24:25.819 "disable_auto_failback": false, 00:24:25.819 "generate_uuids": false, 00:24:25.819 "transport_tos": 0, 00:24:25.819 "nvme_error_stat": false, 00:24:25.819 "rdma_srq_size": 0, 00:24:25.819 "io_path_stat": false, 00:24:25.819 "allow_accel_sequence": false, 00:24:25.819 "rdma_max_cq_size": 0, 00:24:25.819 "rdma_cm_event_timeout_ms": 0, 00:24:25.819 "dhchap_digests": [ 00:24:25.819 "sha256", 00:24:25.819 "sha384", 00:24:25.819 "sha512" 00:24:25.819 ], 00:24:25.819 "dhchap_dhgroups": [ 00:24:25.819 "null", 00:24:25.819 "ffdhe2048", 00:24:25.819 "ffdhe3072", 00:24:25.819 "ffdhe4096", 00:24:25.819 "ffdhe6144", 00:24:25.819 "ffdhe8192" 00:24:25.819 ] 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "bdev_nvme_set_hotplug", 00:24:25.819 "params": { 00:24:25.819 "period_us": 100000, 00:24:25.819 "enable": false 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "bdev_malloc_create", 00:24:25.819 "params": { 00:24:25.819 "name": "malloc0", 00:24:25.819 "num_blocks": 8192, 00:24:25.819 "block_size": 4096, 00:24:25.819 "physical_block_size": 4096, 00:24:25.819 "uuid": "78ca78ef-0eda-4a99-a7c6-4ed5e33eab56", 00:24:25.819 "optimal_io_boundary": 0, 00:24:25.819 "md_size": 0, 00:24:25.819 "dif_type": 0, 00:24:25.819 "dif_is_head_of_md": false, 00:24:25.819 "dif_pi_format": 0 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "bdev_wait_for_examine" 00:24:25.819 } 00:24:25.819 ] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "nbd", 00:24:25.819 "config": [] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "scheduler", 00:24:25.819 "config": [ 00:24:25.819 { 00:24:25.819 "method": "framework_set_scheduler", 00:24:25.819 "params": { 00:24:25.819 "name": "static" 00:24:25.819 } 00:24:25.819 } 00:24:25.819 ] 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "subsystem": "nvmf", 00:24:25.819 "config": [ 00:24:25.819 { 
00:24:25.819 "method": "nvmf_set_config", 00:24:25.819 "params": { 00:24:25.819 "discovery_filter": "match_any", 00:24:25.819 "admin_cmd_passthru": { 00:24:25.819 "identify_ctrlr": false 00:24:25.819 }, 00:24:25.819 "dhchap_digests": [ 00:24:25.819 "sha256", 00:24:25.819 "sha384", 00:24:25.819 "sha512" 00:24:25.819 ], 00:24:25.819 "dhchap_dhgroups": [ 00:24:25.819 "null", 00:24:25.819 "ffdhe2048", 00:24:25.819 "ffdhe3072", 00:24:25.819 "ffdhe4096", 00:24:25.819 "ffdhe6144", 00:24:25.819 "ffdhe8192" 00:24:25.819 ] 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "nvmf_set_max_subsystems", 00:24:25.819 "params": { 00:24:25.819 "max_subsystems": 1024 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "nvmf_set_crdt", 00:24:25.819 "params": { 00:24:25.819 "crdt1": 0, 00:24:25.819 "crdt2": 0, 00:24:25.819 "crdt3": 0 00:24:25.819 } 00:24:25.819 }, 00:24:25.819 { 00:24:25.819 "method": "nvmf_create_transport", 00:24:25.819 "params": { 00:24:25.819 "trtype": "TCP", 00:24:25.819 "max_queue_depth": 128, 00:24:25.819 "max_io_qpairs_per_ctrlr": 127, 00:24:25.819 "in_capsule_data_size": 4096, 00:24:25.819 "max_io_size": 131072, 00:24:25.819 "io_unit_size": 131072, 00:24:25.819 "max_aq_depth": 128, 00:24:25.819 "num_shared_buffers": 511, 00:24:25.819 "buf_cache_size": 4294967295, 00:24:25.819 "dif_insert_or_strip": false, 00:24:25.819 "zcopy": false, 00:24:25.819 "c2h_success": false, 00:24:25.819 "sock_priority": 0, 00:24:25.819 "abort_timeout_sec": 1, 00:24:25.819 "ack_timeout": 0, 00:24:25.819 "data_wr_pool_size": 0 00:24:25.820 } 00:24:25.820 }, 00:24:25.820 { 00:24:25.820 "method": "nvmf_create_subsystem", 00:24:25.820 "params": { 00:24:25.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.820 "allow_any_host": false, 00:24:25.820 "serial_number": "00000000000000000000", 00:24:25.820 "model_number": "SPDK bdev Controller", 00:24:25.820 "max_namespaces": 32, 00:24:25.820 "min_cntlid": 1, 00:24:25.820 "max_cntlid": 65519, 00:24:25.820 
"ana_reporting": false 00:24:25.820 } 00:24:25.820 }, 00:24:25.820 { 00:24:25.820 "method": "nvmf_subsystem_add_host", 00:24:25.820 "params": { 00:24:25.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.820 "host": "nqn.2016-06.io.spdk:host1", 00:24:25.820 "psk": "key0" 00:24:25.820 } 00:24:25.820 }, 00:24:25.820 { 00:24:25.820 "method": "nvmf_subsystem_add_ns", 00:24:25.820 "params": { 00:24:25.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.820 "namespace": { 00:24:25.820 "nsid": 1, 00:24:25.820 "bdev_name": "malloc0", 00:24:25.820 "nguid": "78CA78EF0EDA4A99A7C64ED5E33EAB56", 00:24:25.820 "uuid": "78ca78ef-0eda-4a99-a7c6-4ed5e33eab56", 00:24:25.820 "no_auto_visible": false 00:24:25.820 } 00:24:25.820 } 00:24:25.820 }, 00:24:25.820 { 00:24:25.820 "method": "nvmf_subsystem_add_listener", 00:24:25.820 "params": { 00:24:25.820 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.820 "listen_address": { 00:24:25.820 "trtype": "TCP", 00:24:25.820 "adrfam": "IPv4", 00:24:25.820 "traddr": "10.0.0.2", 00:24:25.820 "trsvcid": "4420" 00:24:25.820 }, 00:24:25.820 "secure_channel": false, 00:24:25.820 "sock_impl": "ssl" 00:24:25.820 } 00:24:25.820 } 00:24:25.820 ] 00:24:25.820 } 00:24:25.820 ] 00:24:25.820 }' 00:24:25.820 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:26.079 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:26.079 "subsystems": [ 00:24:26.079 { 00:24:26.079 "subsystem": "keyring", 00:24:26.079 "config": [ 00:24:26.079 { 00:24:26.079 "method": "keyring_file_add_key", 00:24:26.079 "params": { 00:24:26.079 "name": "key0", 00:24:26.079 "path": "/tmp/tmp.oqKdPWxmod" 00:24:26.079 } 00:24:26.079 } 00:24:26.079 ] 00:24:26.079 }, 00:24:26.079 { 00:24:26.079 "subsystem": "iobuf", 00:24:26.079 "config": [ 00:24:26.079 { 00:24:26.079 "method": "iobuf_set_options", 00:24:26.079 "params": { 00:24:26.079 
"small_pool_count": 8192, 00:24:26.079 "large_pool_count": 1024, 00:24:26.079 "small_bufsize": 8192, 00:24:26.079 "large_bufsize": 135168, 00:24:26.079 "enable_numa": false 00:24:26.079 } 00:24:26.079 } 00:24:26.079 ] 00:24:26.079 }, 00:24:26.079 { 00:24:26.079 "subsystem": "sock", 00:24:26.079 "config": [ 00:24:26.079 { 00:24:26.079 "method": "sock_set_default_impl", 00:24:26.079 "params": { 00:24:26.079 "impl_name": "posix" 00:24:26.079 } 00:24:26.079 }, 00:24:26.079 { 00:24:26.079 "method": "sock_impl_set_options", 00:24:26.079 "params": { 00:24:26.079 "impl_name": "ssl", 00:24:26.079 "recv_buf_size": 4096, 00:24:26.079 "send_buf_size": 4096, 00:24:26.079 "enable_recv_pipe": true, 00:24:26.079 "enable_quickack": false, 00:24:26.079 "enable_placement_id": 0, 00:24:26.079 "enable_zerocopy_send_server": true, 00:24:26.079 "enable_zerocopy_send_client": false, 00:24:26.079 "zerocopy_threshold": 0, 00:24:26.079 "tls_version": 0, 00:24:26.079 "enable_ktls": false 00:24:26.079 } 00:24:26.079 }, 00:24:26.079 { 00:24:26.079 "method": "sock_impl_set_options", 00:24:26.079 "params": { 00:24:26.079 "impl_name": "posix", 00:24:26.079 "recv_buf_size": 2097152, 00:24:26.079 "send_buf_size": 2097152, 00:24:26.079 "enable_recv_pipe": true, 00:24:26.079 "enable_quickack": false, 00:24:26.079 "enable_placement_id": 0, 00:24:26.079 "enable_zerocopy_send_server": true, 00:24:26.079 "enable_zerocopy_send_client": false, 00:24:26.079 "zerocopy_threshold": 0, 00:24:26.079 "tls_version": 0, 00:24:26.079 "enable_ktls": false 00:24:26.079 } 00:24:26.079 } 00:24:26.079 ] 00:24:26.079 }, 00:24:26.079 { 00:24:26.079 "subsystem": "vmd", 00:24:26.079 "config": [] 00:24:26.079 }, 00:24:26.079 { 00:24:26.079 "subsystem": "accel", 00:24:26.079 "config": [ 00:24:26.079 { 00:24:26.079 "method": "accel_set_options", 00:24:26.079 "params": { 00:24:26.079 "small_cache_size": 128, 00:24:26.079 "large_cache_size": 16, 00:24:26.079 "task_count": 2048, 00:24:26.080 "sequence_count": 2048, 00:24:26.080 
"buf_count": 2048 00:24:26.080 } 00:24:26.080 } 00:24:26.080 ] 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "subsystem": "bdev", 00:24:26.080 "config": [ 00:24:26.080 { 00:24:26.080 "method": "bdev_set_options", 00:24:26.080 "params": { 00:24:26.080 "bdev_io_pool_size": 65535, 00:24:26.080 "bdev_io_cache_size": 256, 00:24:26.080 "bdev_auto_examine": true, 00:24:26.080 "iobuf_small_cache_size": 128, 00:24:26.080 "iobuf_large_cache_size": 16 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_raid_set_options", 00:24:26.080 "params": { 00:24:26.080 "process_window_size_kb": 1024, 00:24:26.080 "process_max_bandwidth_mb_sec": 0 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_iscsi_set_options", 00:24:26.080 "params": { 00:24:26.080 "timeout_sec": 30 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_nvme_set_options", 00:24:26.080 "params": { 00:24:26.080 "action_on_timeout": "none", 00:24:26.080 "timeout_us": 0, 00:24:26.080 "timeout_admin_us": 0, 00:24:26.080 "keep_alive_timeout_ms": 10000, 00:24:26.080 "arbitration_burst": 0, 00:24:26.080 "low_priority_weight": 0, 00:24:26.080 "medium_priority_weight": 0, 00:24:26.080 "high_priority_weight": 0, 00:24:26.080 "nvme_adminq_poll_period_us": 10000, 00:24:26.080 "nvme_ioq_poll_period_us": 0, 00:24:26.080 "io_queue_requests": 512, 00:24:26.080 "delay_cmd_submit": true, 00:24:26.080 "transport_retry_count": 4, 00:24:26.080 "bdev_retry_count": 3, 00:24:26.080 "transport_ack_timeout": 0, 00:24:26.080 "ctrlr_loss_timeout_sec": 0, 00:24:26.080 "reconnect_delay_sec": 0, 00:24:26.080 "fast_io_fail_timeout_sec": 0, 00:24:26.080 "disable_auto_failback": false, 00:24:26.080 "generate_uuids": false, 00:24:26.080 "transport_tos": 0, 00:24:26.080 "nvme_error_stat": false, 00:24:26.080 "rdma_srq_size": 0, 00:24:26.080 "io_path_stat": false, 00:24:26.080 "allow_accel_sequence": false, 00:24:26.080 "rdma_max_cq_size": 0, 00:24:26.080 "rdma_cm_event_timeout_ms": 0, 
00:24:26.080 "dhchap_digests": [ 00:24:26.080 "sha256", 00:24:26.080 "sha384", 00:24:26.080 "sha512" 00:24:26.080 ], 00:24:26.080 "dhchap_dhgroups": [ 00:24:26.080 "null", 00:24:26.080 "ffdhe2048", 00:24:26.080 "ffdhe3072", 00:24:26.080 "ffdhe4096", 00:24:26.080 "ffdhe6144", 00:24:26.080 "ffdhe8192" 00:24:26.080 ] 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_nvme_attach_controller", 00:24:26.080 "params": { 00:24:26.080 "name": "nvme0", 00:24:26.080 "trtype": "TCP", 00:24:26.080 "adrfam": "IPv4", 00:24:26.080 "traddr": "10.0.0.2", 00:24:26.080 "trsvcid": "4420", 00:24:26.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.080 "prchk_reftag": false, 00:24:26.080 "prchk_guard": false, 00:24:26.080 "ctrlr_loss_timeout_sec": 0, 00:24:26.080 "reconnect_delay_sec": 0, 00:24:26.080 "fast_io_fail_timeout_sec": 0, 00:24:26.080 "psk": "key0", 00:24:26.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.080 "hdgst": false, 00:24:26.080 "ddgst": false, 00:24:26.080 "multipath": "multipath" 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_nvme_set_hotplug", 00:24:26.080 "params": { 00:24:26.080 "period_us": 100000, 00:24:26.080 "enable": false 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_enable_histogram", 00:24:26.080 "params": { 00:24:26.080 "name": "nvme0n1", 00:24:26.080 "enable": true 00:24:26.080 } 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "method": "bdev_wait_for_examine" 00:24:26.080 } 00:24:26.080 ] 00:24:26.080 }, 00:24:26.080 { 00:24:26.080 "subsystem": "nbd", 00:24:26.080 "config": [] 00:24:26.080 } 00:24:26.080 ] 00:24:26.080 }' 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1618936 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1618936 ']' 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1618936 00:24:26.080 14:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618936 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618936' 00:24:26.080 killing process with pid 1618936 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1618936 00:24:26.080 Received shutdown signal, test time was about 1.000000 seconds 00:24:26.080 00:24:26.080 Latency(us) 00:24:26.080 [2024-11-20T13:43:38.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.080 [2024-11-20T13:43:38.038Z] =================================================================================================================== 00:24:26.080 [2024-11-20T13:43:38.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.080 14:43:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1618936 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1618914 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1618914 ']' 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1618914 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.340 
14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618914 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618914' 00:24:26.340 killing process with pid 1618914 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1618914 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1618914 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.340 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:26.340 "subsystems": [ 00:24:26.340 { 00:24:26.340 "subsystem": "keyring", 00:24:26.340 "config": [ 00:24:26.340 { 00:24:26.340 "method": "keyring_file_add_key", 00:24:26.340 "params": { 00:24:26.340 "name": "key0", 00:24:26.340 "path": "/tmp/tmp.oqKdPWxmod" 00:24:26.340 } 00:24:26.340 } 00:24:26.340 ] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "iobuf", 00:24:26.340 "config": [ 00:24:26.340 { 00:24:26.340 "method": "iobuf_set_options", 00:24:26.340 "params": { 00:24:26.340 "small_pool_count": 8192, 00:24:26.340 "large_pool_count": 1024, 00:24:26.340 "small_bufsize": 8192, 00:24:26.340 "large_bufsize": 135168, 00:24:26.340 "enable_numa": false 00:24:26.340 } 00:24:26.340 } 00:24:26.340 ] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "sock", 00:24:26.340 "config": [ 
00:24:26.340 { 00:24:26.340 "method": "sock_set_default_impl", 00:24:26.340 "params": { 00:24:26.340 "impl_name": "posix" 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "sock_impl_set_options", 00:24:26.340 "params": { 00:24:26.340 "impl_name": "ssl", 00:24:26.340 "recv_buf_size": 4096, 00:24:26.340 "send_buf_size": 4096, 00:24:26.340 "enable_recv_pipe": true, 00:24:26.340 "enable_quickack": false, 00:24:26.340 "enable_placement_id": 0, 00:24:26.340 "enable_zerocopy_send_server": true, 00:24:26.340 "enable_zerocopy_send_client": false, 00:24:26.340 "zerocopy_threshold": 0, 00:24:26.340 "tls_version": 0, 00:24:26.340 "enable_ktls": false 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "sock_impl_set_options", 00:24:26.340 "params": { 00:24:26.340 "impl_name": "posix", 00:24:26.340 "recv_buf_size": 2097152, 00:24:26.340 "send_buf_size": 2097152, 00:24:26.340 "enable_recv_pipe": true, 00:24:26.340 "enable_quickack": false, 00:24:26.340 "enable_placement_id": 0, 00:24:26.340 "enable_zerocopy_send_server": true, 00:24:26.340 "enable_zerocopy_send_client": false, 00:24:26.340 "zerocopy_threshold": 0, 00:24:26.340 "tls_version": 0, 00:24:26.340 "enable_ktls": false 00:24:26.340 } 00:24:26.340 } 00:24:26.340 ] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "vmd", 00:24:26.340 "config": [] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "accel", 00:24:26.340 "config": [ 00:24:26.340 { 00:24:26.340 "method": "accel_set_options", 00:24:26.340 "params": { 00:24:26.340 "small_cache_size": 128, 00:24:26.340 "large_cache_size": 16, 00:24:26.340 "task_count": 2048, 00:24:26.340 "sequence_count": 2048, 00:24:26.340 "buf_count": 2048 00:24:26.340 } 00:24:26.340 } 00:24:26.340 ] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "bdev", 00:24:26.340 "config": [ 00:24:26.340 { 00:24:26.340 "method": "bdev_set_options", 00:24:26.340 "params": { 00:24:26.340 "bdev_io_pool_size": 65535, 00:24:26.340 "bdev_io_cache_size": 
256, 00:24:26.340 "bdev_auto_examine": true, 00:24:26.340 "iobuf_small_cache_size": 128, 00:24:26.340 "iobuf_large_cache_size": 16 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "bdev_raid_set_options", 00:24:26.340 "params": { 00:24:26.340 "process_window_size_kb": 1024, 00:24:26.340 "process_max_bandwidth_mb_sec": 0 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "bdev_iscsi_set_options", 00:24:26.340 "params": { 00:24:26.340 "timeout_sec": 30 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "bdev_nvme_set_options", 00:24:26.340 "params": { 00:24:26.340 "action_on_timeout": "none", 00:24:26.340 "timeout_us": 0, 00:24:26.340 "timeout_admin_us": 0, 00:24:26.340 "keep_alive_timeout_ms": 10000, 00:24:26.340 "arbitration_burst": 0, 00:24:26.340 "low_priority_weight": 0, 00:24:26.340 "medium_priority_weight": 0, 00:24:26.340 "high_priority_weight": 0, 00:24:26.340 "nvme_adminq_poll_period_us": 10000, 00:24:26.340 "nvme_ioq_poll_period_us": 0, 00:24:26.340 "io_queue_requests": 0, 00:24:26.340 "delay_cmd_submit": true, 00:24:26.340 "transport_retry_count": 4, 00:24:26.340 "bdev_retry_count": 3, 00:24:26.340 "transport_ack_timeout": 0, 00:24:26.340 "ctrlr_loss_timeout_sec": 0, 00:24:26.340 "reconnect_delay_sec": 0, 00:24:26.340 "fast_io_fail_timeout_sec": 0, 00:24:26.340 "disable_auto_failback": false, 00:24:26.340 "generate_uuids": false, 00:24:26.340 "transport_tos": 0, 00:24:26.340 "nvme_error_stat": false, 00:24:26.340 "rdma_srq_size": 0, 00:24:26.340 "io_path_stat": false, 00:24:26.340 "allow_accel_sequence": false, 00:24:26.340 "rdma_max_cq_size": 0, 00:24:26.340 "rdma_cm_event_timeout_ms": 0, 00:24:26.340 "dhchap_digests": [ 00:24:26.340 "sha256", 00:24:26.340 "sha384", 00:24:26.340 "sha512" 00:24:26.340 ], 00:24:26.340 "dhchap_dhgroups": [ 00:24:26.340 "null", 00:24:26.340 "ffdhe2048", 00:24:26.340 "ffdhe3072", 00:24:26.340 "ffdhe4096", 00:24:26.340 "ffdhe6144", 00:24:26.340 "ffdhe8192" 00:24:26.340 ] 
00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "bdev_nvme_set_hotplug", 00:24:26.340 "params": { 00:24:26.340 "period_us": 100000, 00:24:26.340 "enable": false 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "bdev_malloc_create", 00:24:26.340 "params": { 00:24:26.340 "name": "malloc0", 00:24:26.340 "num_blocks": 8192, 00:24:26.340 "block_size": 4096, 00:24:26.340 "physical_block_size": 4096, 00:24:26.340 "uuid": "78ca78ef-0eda-4a99-a7c6-4ed5e33eab56", 00:24:26.340 "optimal_io_boundary": 0, 00:24:26.340 "md_size": 0, 00:24:26.340 "dif_type": 0, 00:24:26.340 "dif_is_head_of_md": false, 00:24:26.340 "dif_pi_format": 0 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "bdev_wait_for_examine" 00:24:26.340 } 00:24:26.340 ] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "nbd", 00:24:26.340 "config": [] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "scheduler", 00:24:26.340 "config": [ 00:24:26.340 { 00:24:26.340 "method": "framework_set_scheduler", 00:24:26.340 "params": { 00:24:26.340 "name": "static" 00:24:26.340 } 00:24:26.340 } 00:24:26.340 ] 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "subsystem": "nvmf", 00:24:26.340 "config": [ 00:24:26.340 { 00:24:26.340 "method": "nvmf_set_config", 00:24:26.340 "params": { 00:24:26.340 "discovery_filter": "match_any", 00:24:26.340 "admin_cmd_passthru": { 00:24:26.340 "identify_ctrlr": false 00:24:26.340 }, 00:24:26.340 "dhchap_digests": [ 00:24:26.340 "sha256", 00:24:26.340 "sha384", 00:24:26.340 "sha512" 00:24:26.340 ], 00:24:26.340 "dhchap_dhgroups": [ 00:24:26.340 "null", 00:24:26.340 "ffdhe2048", 00:24:26.340 "ffdhe3072", 00:24:26.340 "ffdhe4096", 00:24:26.340 "ffdhe6144", 00:24:26.340 "ffdhe8192" 00:24:26.340 ] 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.340 "method": "nvmf_set_max_subsystems", 00:24:26.340 "params": { 00:24:26.340 "max_subsystems": 1024 00:24:26.340 } 00:24:26.340 }, 00:24:26.340 { 00:24:26.341 "method": 
"nvmf_set_crdt", 00:24:26.341 "params": { 00:24:26.341 "crdt1": 0, 00:24:26.341 "crdt2": 0, 00:24:26.341 "crdt3": 0 00:24:26.341 } 00:24:26.341 }, 00:24:26.341 { 00:24:26.341 "method": "nvmf_create_transport", 00:24:26.341 "params": { 00:24:26.341 "trtype": "TCP", 00:24:26.341 "max_queue_depth": 128, 00:24:26.341 "max_io_qpairs_per_ctrlr": 127, 00:24:26.341 "in_capsule_data_size": 4096, 00:24:26.341 "max_io_size": 131072, 00:24:26.341 "io_unit_size": 131072, 00:24:26.341 "max_aq_depth": 128, 00:24:26.341 "num_shared_buffers": 511, 00:24:26.341 "buf_cache_size": 4294967295, 00:24:26.341 "dif_insert_or_strip": false, 00:24:26.341 "zcopy": false, 00:24:26.341 "c2h_success": false, 00:24:26.341 "sock_priority": 0, 00:24:26.341 "abort_timeout_sec": 1, 00:24:26.341 "ack_timeout": 0, 00:24:26.341 "data_wr_pool_size": 0 00:24:26.341 } 00:24:26.341 }, 00:24:26.341 { 00:24:26.341 "method": "nvmf_create_subsystem", 00:24:26.341 "params": { 00:24:26.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.341 "allow_any_host": false, 00:24:26.341 "serial_number": "00000000000000000000", 00:24:26.341 "model_number": "SPDK bdev Controller", 00:24:26.341 "max_namespaces": 32, 00:24:26.341 "min_cntlid": 1, 00:24:26.341 "max_cntlid": 65519, 00:24:26.341 "ana_reporting": false 00:24:26.341 } 00:24:26.341 }, 00:24:26.341 { 00:24:26.341 "method": "nvmf_subsystem_add_host", 00:24:26.341 "params": { 00:24:26.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.341 "host": "nqn.2016-06.io.spdk:host1", 00:24:26.341 "psk": "key0" 00:24:26.341 } 00:24:26.341 }, 00:24:26.341 { 00:24:26.341 "method": "nvmf_subsystem_add_ns", 00:24:26.341 "params": { 00:24:26.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.341 "namespace": { 00:24:26.341 "nsid": 1, 00:24:26.341 "bdev_name": "malloc0", 00:24:26.341 "nguid": "78CA78EF0EDA4A99A7C64ED5E33EAB56", 00:24:26.341 "uuid": "78ca78ef-0eda-4a99-a7c6-4ed5e33eab56", 00:24:26.341 "no_auto_visible": false 00:24:26.341 } 00:24:26.341 } 00:24:26.341 }, 00:24:26.341 { 
00:24:26.341 "method": "nvmf_subsystem_add_listener", 00:24:26.341 "params": { 00:24:26.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.341 "listen_address": { 00:24:26.341 "trtype": "TCP", 00:24:26.341 "adrfam": "IPv4", 00:24:26.341 "traddr": "10.0.0.2", 00:24:26.341 "trsvcid": "4420" 00:24:26.341 }, 00:24:26.341 "secure_channel": false, 00:24:26.341 "sock_impl": "ssl" 00:24:26.341 } 00:24:26.341 } 00:24:26.341 ] 00:24:26.341 } 00:24:26.341 ] 00:24:26.341 }' 00:24:26.341 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1619419 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1619419 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1619419 ']' 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.600 14:43:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.600 [2024-11-20 14:43:38.344904] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:24:26.600 [2024-11-20 14:43:38.344957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.600 [2024-11-20 14:43:38.421828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.600 [2024-11-20 14:43:38.462274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.600 [2024-11-20 14:43:38.462312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.600 [2024-11-20 14:43:38.462319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.600 [2024-11-20 14:43:38.462325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.600 [2024-11-20 14:43:38.462330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:26.600 [2024-11-20 14:43:38.462918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.859 [2024-11-20 14:43:38.676522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.859 [2024-11-20 14:43:38.708553] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.859 [2024-11-20 14:43:38.708749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1619662 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1619662 /var/tmp/bdevperf.sock 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1619662 ']' 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.425 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:27.425 "subsystems": [ 00:24:27.425 { 00:24:27.425 "subsystem": "keyring", 00:24:27.425 "config": [ 00:24:27.425 { 00:24:27.425 "method": "keyring_file_add_key", 00:24:27.425 "params": { 00:24:27.425 "name": "key0", 00:24:27.425 "path": "/tmp/tmp.oqKdPWxmod" 00:24:27.425 } 00:24:27.425 } 00:24:27.425 ] 00:24:27.425 }, 00:24:27.425 { 00:24:27.425 "subsystem": "iobuf", 00:24:27.425 "config": [ 00:24:27.425 { 00:24:27.425 "method": "iobuf_set_options", 00:24:27.425 "params": { 00:24:27.425 "small_pool_count": 8192, 00:24:27.425 "large_pool_count": 1024, 00:24:27.425 "small_bufsize": 8192, 00:24:27.425 "large_bufsize": 135168, 00:24:27.425 "enable_numa": false 00:24:27.425 } 00:24:27.425 } 00:24:27.425 ] 00:24:27.425 }, 00:24:27.425 { 00:24:27.425 "subsystem": "sock", 00:24:27.425 "config": [ 00:24:27.425 { 00:24:27.425 "method": "sock_set_default_impl", 00:24:27.425 "params": { 00:24:27.425 "impl_name": "posix" 00:24:27.425 } 00:24:27.425 }, 00:24:27.425 { 00:24:27.425 "method": "sock_impl_set_options", 00:24:27.425 "params": { 00:24:27.425 "impl_name": "ssl", 00:24:27.425 "recv_buf_size": 4096, 00:24:27.425 "send_buf_size": 4096, 00:24:27.425 "enable_recv_pipe": true, 00:24:27.426 "enable_quickack": false, 00:24:27.426 "enable_placement_id": 0, 00:24:27.426 "enable_zerocopy_send_server": true, 00:24:27.426 "enable_zerocopy_send_client": false, 00:24:27.426 "zerocopy_threshold": 0, 00:24:27.426 "tls_version": 0, 00:24:27.426 "enable_ktls": false 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "sock_impl_set_options", 00:24:27.426 "params": { 
00:24:27.426 "impl_name": "posix", 00:24:27.426 "recv_buf_size": 2097152, 00:24:27.426 "send_buf_size": 2097152, 00:24:27.426 "enable_recv_pipe": true, 00:24:27.426 "enable_quickack": false, 00:24:27.426 "enable_placement_id": 0, 00:24:27.426 "enable_zerocopy_send_server": true, 00:24:27.426 "enable_zerocopy_send_client": false, 00:24:27.426 "zerocopy_threshold": 0, 00:24:27.426 "tls_version": 0, 00:24:27.426 "enable_ktls": false 00:24:27.426 } 00:24:27.426 } 00:24:27.426 ] 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "subsystem": "vmd", 00:24:27.426 "config": [] 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "subsystem": "accel", 00:24:27.426 "config": [ 00:24:27.426 { 00:24:27.426 "method": "accel_set_options", 00:24:27.426 "params": { 00:24:27.426 "small_cache_size": 128, 00:24:27.426 "large_cache_size": 16, 00:24:27.426 "task_count": 2048, 00:24:27.426 "sequence_count": 2048, 00:24:27.426 "buf_count": 2048 00:24:27.426 } 00:24:27.426 } 00:24:27.426 ] 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "subsystem": "bdev", 00:24:27.426 "config": [ 00:24:27.426 { 00:24:27.426 "method": "bdev_set_options", 00:24:27.426 "params": { 00:24:27.426 "bdev_io_pool_size": 65535, 00:24:27.426 "bdev_io_cache_size": 256, 00:24:27.426 "bdev_auto_examine": true, 00:24:27.426 "iobuf_small_cache_size": 128, 00:24:27.426 "iobuf_large_cache_size": 16 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "bdev_raid_set_options", 00:24:27.426 "params": { 00:24:27.426 "process_window_size_kb": 1024, 00:24:27.426 "process_max_bandwidth_mb_sec": 0 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "bdev_iscsi_set_options", 00:24:27.426 "params": { 00:24:27.426 "timeout_sec": 30 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "bdev_nvme_set_options", 00:24:27.426 "params": { 00:24:27.426 "action_on_timeout": "none", 00:24:27.426 "timeout_us": 0, 00:24:27.426 "timeout_admin_us": 0, 00:24:27.426 "keep_alive_timeout_ms": 10000, 00:24:27.426 
"arbitration_burst": 0, 00:24:27.426 "low_priority_weight": 0, 00:24:27.426 "medium_priority_weight": 0, 00:24:27.426 "high_priority_weight": 0, 00:24:27.426 "nvme_adminq_poll_period_us": 10000, 00:24:27.426 "nvme_ioq_poll_period_us": 0, 00:24:27.426 "io_queue_requests": 512, 00:24:27.426 "delay_cmd_submit": true, 00:24:27.426 "transport_retry_count": 4, 00:24:27.426 "bdev_retry_count": 3, 00:24:27.426 "transport_ack_timeout": 0, 00:24:27.426 "ctrlr_loss_timeout_sec": 0, 00:24:27.426 "reconnect_delay_sec": 0, 00:24:27.426 "fast_io_fail_timeout_sec": 0, 00:24:27.426 "disable_auto_failback": false, 00:24:27.426 "generate_uuids": false, 00:24:27.426 "transport_tos": 0, 00:24:27.426 "nvme_error_stat": false, 00:24:27.426 "rdma_srq_size": 0, 00:24:27.426 "io_path_stat": false, 00:24:27.426 "allow_accel_sequence": false, 00:24:27.426 "rdma_max_cq_size": 0, 00:24:27.426 "rdma_cm_event_timeout_ms": 0, 00:24:27.426 "dhchap_digests": [ 00:24:27.426 "sha256", 00:24:27.426 "sha384", 00:24:27.426 "sha512" 00:24:27.426 ], 00:24:27.426 "dhchap_dhgroups": [ 00:24:27.426 "null", 00:24:27.426 "ffdhe2048", 00:24:27.426 "ffdhe3072", 00:24:27.426 "ffdhe4096", 00:24:27.426 "ffdhe6144", 00:24:27.426 "ffdhe8192" 00:24:27.426 ] 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "bdev_nvme_attach_controller", 00:24:27.426 "params": { 00:24:27.426 "name": "nvme0", 00:24:27.426 "trtype": "TCP", 00:24:27.426 "adrfam": "IPv4", 00:24:27.426 "traddr": "10.0.0.2", 00:24:27.426 "trsvcid": "4420", 00:24:27.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.426 "prchk_reftag": false, 00:24:27.426 "prchk_guard": false, 00:24:27.426 "ctrlr_loss_timeout_sec": 0, 00:24:27.426 "reconnect_delay_sec": 0, 00:24:27.426 "fast_io_fail_timeout_sec": 0, 00:24:27.426 "psk": "key0", 00:24:27.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.426 "hdgst": false, 00:24:27.426 "ddgst": false, 00:24:27.426 "multipath": "multipath" 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 
"method": "bdev_nvme_set_hotplug", 00:24:27.426 "params": { 00:24:27.426 "period_us": 100000, 00:24:27.426 "enable": false 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "bdev_enable_histogram", 00:24:27.426 "params": { 00:24:27.426 "name": "nvme0n1", 00:24:27.426 "enable": true 00:24:27.426 } 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "method": "bdev_wait_for_examine" 00:24:27.426 } 00:24:27.426 ] 00:24:27.426 }, 00:24:27.426 { 00:24:27.426 "subsystem": "nbd", 00:24:27.426 "config": [] 00:24:27.426 } 00:24:27.426 ] 00:24:27.426 }' 00:24:27.426 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.426 14:43:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.426 [2024-11-20 14:43:39.277925] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:24:27.426 [2024-11-20 14:43:39.277980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619662 ] 00:24:27.426 [2024-11-20 14:43:39.351094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.685 [2024-11-20 14:43:39.392932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.685 [2024-11-20 14:43:39.547739] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.252 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.252 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:28.252 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.252 14:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:28.511 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.511 14:43:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.511 Running I/O for 1 seconds... 00:24:29.888 5066.00 IOPS, 19.79 MiB/s 00:24:29.888 Latency(us) 00:24:29.888 [2024-11-20T13:43:41.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.888 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:29.888 Verification LBA range: start 0x0 length 0x2000 00:24:29.888 nvme0n1 : 1.02 5104.53 19.94 0.00 0.00 24885.94 6696.07 27468.13 00:24:29.888 [2024-11-20T13:43:41.846Z] =================================================================================================================== 00:24:29.888 [2024-11-20T13:43:41.846Z] Total : 5104.53 19.94 0.00 0.00 24885.94 6696.07 27468.13 00:24:29.888 { 00:24:29.888 "results": [ 00:24:29.888 { 00:24:29.888 "job": "nvme0n1", 00:24:29.888 "core_mask": "0x2", 00:24:29.888 "workload": "verify", 00:24:29.888 "status": "finished", 00:24:29.888 "verify_range": { 00:24:29.888 "start": 0, 00:24:29.888 "length": 8192 00:24:29.888 }, 00:24:29.888 "queue_depth": 128, 00:24:29.888 "io_size": 4096, 00:24:29.888 "runtime": 1.017528, 00:24:29.888 "iops": 5104.527836089032, 00:24:29.888 "mibps": 19.93956185972278, 00:24:29.888 "io_failed": 0, 00:24:29.888 "io_timeout": 0, 00:24:29.888 "avg_latency_us": 24885.94124993722, 00:24:29.888 "min_latency_us": 6696.069565217392, 00:24:29.888 "max_latency_us": 27468.132173913044 00:24:29.888 } 00:24:29.888 ], 00:24:29.888 "core_count": 1 00:24:29.888 } 00:24:29.888 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:29.888 14:43:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:29.888 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:29.888 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:29.888 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:29.888 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:29.889 nvmf_trace.0 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1619662 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1619662 ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1619662 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1619662 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1619662' 00:24:29.889 killing process with pid 1619662 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1619662 00:24:29.889 Received shutdown signal, test time was about 1.000000 seconds 00:24:29.889 00:24:29.889 Latency(us) 00:24:29.889 [2024-11-20T13:43:41.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.889 [2024-11-20T13:43:41.847Z] =================================================================================================================== 00:24:29.889 [2024-11-20T13:43:41.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1619662 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.889 rmmod nvme_tcp 00:24:29.889 rmmod nvme_fabrics 00:24:29.889 rmmod nvme_keyring 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1619419 ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1619419 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1619419 ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1619419 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.889 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1619419 00:24:30.148 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.149 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.149 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1619419' 00:24:30.149 killing process with pid 1619419 00:24:30.149 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1619419 00:24:30.149 14:43:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1619419 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.149 14:43:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TrQV1IGQK4 /tmp/tmp.6h7yzah5HW /tmp/tmp.oqKdPWxmod 00:24:32.685 00:24:32.685 real 1m20.827s 00:24:32.685 user 2m4.379s 00:24:32.685 sys 0m29.697s 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.685 ************************************ 00:24:32.685 END TEST nvmf_tls 00:24:32.685 ************************************ 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:32.685 ************************************ 00:24:32.685 START TEST nvmf_fips 00:24:32.685 ************************************ 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:32.685 * Looking for test storage... 00:24:32.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.685 
14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:32.685 14:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:32.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.685 --rc genhtml_branch_coverage=1 00:24:32.685 --rc genhtml_function_coverage=1 00:24:32.685 --rc genhtml_legend=1 00:24:32.685 --rc geninfo_all_blocks=1 00:24:32.685 --rc geninfo_unexecuted_blocks=1 00:24:32.685 00:24:32.685 ' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.685 --rc genhtml_branch_coverage=1 00:24:32.685 --rc genhtml_function_coverage=1 00:24:32.685 --rc genhtml_legend=1 00:24:32.685 --rc geninfo_all_blocks=1 00:24:32.685 --rc geninfo_unexecuted_blocks=1 00:24:32.685 00:24:32.685 ' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.685 --rc genhtml_branch_coverage=1 00:24:32.685 --rc genhtml_function_coverage=1 00:24:32.685 --rc genhtml_legend=1 00:24:32.685 --rc geninfo_all_blocks=1 00:24:32.685 --rc geninfo_unexecuted_blocks=1 00:24:32.685 00:24:32.685 ' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.685 --rc genhtml_branch_coverage=1 00:24:32.685 --rc genhtml_function_coverage=1 00:24:32.685 --rc genhtml_legend=1 00:24:32.685 --rc geninfo_all_blocks=1 00:24:32.685 --rc geninfo_unexecuted_blocks=1 00:24:32.685 00:24:32.685 ' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.685 14:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.685 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.686 14:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:32.686 Error setting digest 00:24:32.686 40226376FB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:32.686 40226376FB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.686 14:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.686 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.687 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.687 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.687 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:32.687 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.687 14:43:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:39.259 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:39.259 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:39.259 Found net devices under 0000:86:00.0: cvl_0_0 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:39.259 Found net devices under 0000:86:00.1: cvl_0_1 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.259 14:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:24:39.259 00:24:39.259 --- 10.0.0.2 ping statistics --- 00:24:39.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.259 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:24:39.259 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:24:39.259 00:24:39.259 --- 10.0.0.1 ping statistics --- 00:24:39.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.259 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.260 14:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1623630 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1623630 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1623630 ']' 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.260 14:43:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.260 [2024-11-20 14:43:50.569809] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:24:39.260 [2024-11-20 14:43:50.569857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.260 [2024-11-20 14:43:50.649973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.260 [2024-11-20 14:43:50.689688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.260 [2024-11-20 14:43:50.689724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.260 [2024-11-20 14:43:50.689731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.260 [2024-11-20 14:43:50.689737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.260 [2024-11-20 14:43:50.689742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.260 [2024-11-20 14:43:50.690332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ims 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ims 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ims 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ims 00:24:39.519 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.779 [2024-11-20 14:43:51.601954] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.779 [2024-11-20 14:43:51.617958] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.779 [2024-11-20 14:43:51.618166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.779 malloc0 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1623756 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1623756 /var/tmp/bdevperf.sock 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1623756 ']' 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.779 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:40.039 [2024-11-20 14:43:51.748084] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:24:40.039 [2024-11-20 14:43:51.748137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623756 ] 00:24:40.039 [2024-11-20 14:43:51.811741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.039 [2024-11-20 14:43:51.854631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.039 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.039 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:40.039 14:43:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ims 00:24:40.298 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.557 [2024-11-20 14:43:52.324115] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.557 TLSTESTn1 00:24:40.557 14:43:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:40.557 Running I/O for 10 seconds... 
00:24:42.873 5147.00 IOPS, 20.11 MiB/s [2024-11-20T13:43:55.768Z] 5287.50 IOPS, 20.65 MiB/s [2024-11-20T13:43:56.704Z] 5274.67 IOPS, 20.60 MiB/s [2024-11-20T13:43:57.640Z] 5183.00 IOPS, 20.25 MiB/s [2024-11-20T13:43:58.576Z] 5122.80 IOPS, 20.01 MiB/s [2024-11-20T13:43:59.953Z] 5092.67 IOPS, 19.89 MiB/s [2024-11-20T13:44:00.890Z] 5064.14 IOPS, 19.78 MiB/s [2024-11-20T13:44:01.826Z] 5022.25 IOPS, 19.62 MiB/s [2024-11-20T13:44:02.765Z] 5014.78 IOPS, 19.59 MiB/s [2024-11-20T13:44:02.765Z] 5004.00 IOPS, 19.55 MiB/s 00:24:50.807 Latency(us) 00:24:50.807 [2024-11-20T13:44:02.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.807 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:50.807 Verification LBA range: start 0x0 length 0x2000 00:24:50.807 TLSTESTn1 : 10.02 5006.68 19.56 0.00 0.00 25527.51 5841.25 49009.53 00:24:50.807 [2024-11-20T13:44:02.765Z] =================================================================================================================== 00:24:50.807 [2024-11-20T13:44:02.765Z] Total : 5006.68 19.56 0.00 0.00 25527.51 5841.25 49009.53 00:24:50.807 { 00:24:50.807 "results": [ 00:24:50.807 { 00:24:50.807 "job": "TLSTESTn1", 00:24:50.807 "core_mask": "0x4", 00:24:50.807 "workload": "verify", 00:24:50.807 "status": "finished", 00:24:50.807 "verify_range": { 00:24:50.807 "start": 0, 00:24:50.807 "length": 8192 00:24:50.807 }, 00:24:50.807 "queue_depth": 128, 00:24:50.807 "io_size": 4096, 00:24:50.807 "runtime": 10.020014, 00:24:50.807 "iops": 5006.679631385745, 00:24:50.807 "mibps": 19.557342310100566, 00:24:50.807 "io_failed": 0, 00:24:50.807 "io_timeout": 0, 00:24:50.807 "avg_latency_us": 25527.5082394888, 00:24:50.807 "min_latency_us": 5841.252173913043, 00:24:50.807 "max_latency_us": 49009.53043478261 00:24:50.807 } 00:24:50.807 ], 00:24:50.807 "core_count": 1 00:24:50.807 } 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:50.807 
14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:50.807 nvmf_trace.0 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1623756 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1623756 ']' 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1623756 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1623756 00:24:50.807 14:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1623756' 00:24:50.807 killing process with pid 1623756 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1623756 00:24:50.807 Received shutdown signal, test time was about 10.000000 seconds 00:24:50.807 00:24:50.807 Latency(us) 00:24:50.807 [2024-11-20T13:44:02.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.807 [2024-11-20T13:44:02.765Z] =================================================================================================================== 00:24:50.807 [2024-11-20T13:44:02.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.807 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1623756 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.067 rmmod nvme_tcp 00:24:51.067 rmmod nvme_fabrics 00:24:51.067 rmmod nvme_keyring 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1623630 ']' 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1623630 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1623630 ']' 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1623630 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1623630 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1623630' 00:24:51.067 killing process with pid 1623630 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1623630 00:24:51.067 14:44:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1623630 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.327 14:44:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.864 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.864 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ims 00:24:53.864 00:24:53.864 real 0m21.030s 00:24:53.864 user 0m21.437s 00:24:53.864 sys 0m10.234s 00:24:53.864 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.864 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.864 ************************************ 00:24:53.864 END TEST nvmf_fips 00:24:53.864 ************************************ 00:24:53.864 14:44:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:53.864 14:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:53.865 ************************************ 00:24:53.865 START TEST nvmf_control_msg_list 00:24:53.865 ************************************ 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:53.865 * Looking for test storage... 00:24:53.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.865 14:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.865 --rc genhtml_branch_coverage=1 00:24:53.865 --rc genhtml_function_coverage=1 00:24:53.865 --rc genhtml_legend=1 00:24:53.865 --rc geninfo_all_blocks=1 00:24:53.865 --rc geninfo_unexecuted_blocks=1 00:24:53.865 00:24:53.865 ' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.865 --rc genhtml_branch_coverage=1 00:24:53.865 --rc genhtml_function_coverage=1 00:24:53.865 --rc genhtml_legend=1 00:24:53.865 --rc geninfo_all_blocks=1 00:24:53.865 --rc geninfo_unexecuted_blocks=1 00:24:53.865 00:24:53.865 ' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.865 --rc genhtml_branch_coverage=1 00:24:53.865 --rc genhtml_function_coverage=1 00:24:53.865 --rc genhtml_legend=1 00:24:53.865 --rc geninfo_all_blocks=1 00:24:53.865 --rc geninfo_unexecuted_blocks=1 00:24:53.865 00:24:53.865 ' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:24:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.865 --rc genhtml_branch_coverage=1 00:24:53.865 --rc genhtml_function_coverage=1 00:24:53.865 --rc genhtml_legend=1 00:24:53.865 --rc geninfo_all_blocks=1 00:24:53.865 --rc geninfo_unexecuted_blocks=1 00:24:53.865 00:24:53.865 ' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.865 14:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.865 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.866 14:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.866 14:44:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.438 14:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.438 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:00.439 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:00.439 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.439 14:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:00.439 Found net devices under 0000:86:00.0: cvl_0_0 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.439 14:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:00.439 Found net devices under 0000:86:00.1: cvl_0_1 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.439 14:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:25:00.439 00:25:00.439 --- 10.0.0.2 ping statistics --- 00:25:00.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.439 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:25:00.439 00:25:00.439 --- 10.0.0.1 ping statistics --- 00:25:00.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.439 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1629086 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1629086 00:25:00.439 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1629086 ']' 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 [2024-11-20 14:44:11.472272] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:25:00.440 [2024-11-20 14:44:11.472324] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.440 [2024-11-20 14:44:11.551843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.440 [2024-11-20 14:44:11.591002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.440 [2024-11-20 14:44:11.591037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.440 [2024-11-20 14:44:11.591044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.440 [2024-11-20 14:44:11.591049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.440 [2024-11-20 14:44:11.591054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.440 [2024-11-20 14:44:11.591625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 [2024-11-20 14:44:11.740561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 Malloc0 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.440 [2024-11-20 14:44:11.785018] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1629198 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1629200 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1629202 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1629198 00:25:00.440 14:44:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.440 [2024-11-20 14:44:11.863402] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:25:00.440 [2024-11-20 14:44:11.873436] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.440 [2024-11-20 14:44:11.873588] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:01.007 Initializing NVMe Controllers 00:25:01.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:01.007 Initialization complete. Launching workers. 00:25:01.007 ======================================================== 00:25:01.007 Latency(us) 00:25:01.007 Device Information : IOPS MiB/s Average min max 00:25:01.007 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41052.43 40692.84 41955.25 00:25:01.007 ======================================================== 00:25:01.007 Total : 25.00 0.10 41052.43 40692.84 41955.25 00:25:01.007 00:25:01.007 14:44:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1629200 00:25:01.267 Initializing NVMe Controllers 00:25:01.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:01.267 Initialization complete. Launching workers. 
00:25:01.267 ======================================================== 00:25:01.267 Latency(us) 00:25:01.267 Device Information : IOPS MiB/s Average min max 00:25:01.267 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6142.00 23.99 165.54 123.31 40840.52 00:25:01.267 ======================================================== 00:25:01.267 Total : 6142.00 23.99 165.54 123.31 40840.52 00:25:01.267 00:25:01.267 Initializing NVMe Controllers 00:25:01.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:01.267 Initialization complete. Launching workers. 00:25:01.267 ======================================================== 00:25:01.267 Latency(us) 00:25:01.267 Device Information : IOPS MiB/s Average min max 00:25:01.267 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6396.00 24.98 155.99 127.76 371.68 00:25:01.267 ======================================================== 00:25:01.267 Total : 6396.00 24.98 155.99 127.76 371.68 00:25:01.267 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1629202 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:01.267 14:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.267 rmmod nvme_tcp 00:25:01.267 rmmod nvme_fabrics 00:25:01.267 rmmod nvme_keyring 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1629086 ']' 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1629086 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1629086 ']' 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1629086 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1629086 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1629086' 00:25:01.267 killing process with pid 1629086 00:25:01.267 
14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1629086 00:25:01.267 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1629086 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.527 14:44:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.431 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.431 00:25:03.431 real 0m10.088s 00:25:03.431 user 0m6.451s 00:25:03.431 sys 0m5.590s 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.690 ************************************ 00:25:03.690 END TEST nvmf_control_msg_list 00:25:03.690 ************************************ 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:03.690 ************************************ 00:25:03.690 START TEST nvmf_wait_for_buf 00:25:03.690 ************************************ 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:03.690 * Looking for test storage... 
00:25:03.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.690 --rc genhtml_branch_coverage=1 00:25:03.690 --rc genhtml_function_coverage=1 00:25:03.690 --rc genhtml_legend=1 00:25:03.690 --rc geninfo_all_blocks=1 00:25:03.690 --rc geninfo_unexecuted_blocks=1 00:25:03.690 00:25:03.690 ' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.690 --rc genhtml_branch_coverage=1 00:25:03.690 --rc genhtml_function_coverage=1 00:25:03.690 --rc genhtml_legend=1 00:25:03.690 --rc geninfo_all_blocks=1 00:25:03.690 --rc geninfo_unexecuted_blocks=1 00:25:03.690 00:25:03.690 ' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.690 --rc genhtml_branch_coverage=1 00:25:03.690 --rc genhtml_function_coverage=1 00:25:03.690 --rc genhtml_legend=1 00:25:03.690 --rc geninfo_all_blocks=1 00:25:03.690 --rc geninfo_unexecuted_blocks=1 00:25:03.690 00:25:03.690 ' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.690 --rc genhtml_branch_coverage=1 00:25:03.690 --rc genhtml_function_coverage=1 00:25:03.690 --rc genhtml_legend=1 00:25:03.690 --rc geninfo_all_blocks=1 00:25:03.690 --rc geninfo_unexecuted_blocks=1 00:25:03.690 00:25:03.690 ' 00:25:03.690 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.691 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.949 14:44:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:10.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:10.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:10.516 Found net devices under 0000:86:00.0: cvl_0_0 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.516 14:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:10.516 Found net devices under 0000:86:00.1: cvl_0_1 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.516 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.517 14:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.517 14:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:25:10.517 00:25:10.517 --- 10.0.0.2 ping statistics --- 00:25:10.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.517 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:25:10.517 00:25:10.517 --- 10.0.0.1 ping statistics --- 00:25:10.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.517 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1632868 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1632868 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1632868 ']' 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 [2024-11-20 14:44:21.659111] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:25:10.517 [2024-11-20 14:44:21.659158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.517 [2024-11-20 14:44:21.740346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.517 [2024-11-20 14:44:21.782190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.517 [2024-11-20 14:44:21.782225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:10.517 [2024-11-20 14:44:21.782232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.517 [2024-11-20 14:44:21.782238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.517 [2024-11-20 14:44:21.782243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.517 [2024-11-20 14:44:21.782820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 
14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 Malloc0 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.517 [2024-11-20 14:44:21.966822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.518 [2024-11-20 14:44:21.995012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:10.518 14:44:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.518 [2024-11-20 14:44:22.078904] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:11.895 Initializing NVMe Controllers 00:25:11.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:11.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:11.895 Initialization complete. Launching workers. 00:25:11.895 ======================================================== 00:25:11.895 Latency(us) 00:25:11.895 Device Information : IOPS MiB/s Average min max 00:25:11.895 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33538.75 30405.60 71071.88 00:25:11.895 ======================================================== 00:25:11.895 Total : 124.00 15.50 33538.75 30405.60 71071.88 00:25:11.895 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.895 14:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.895 rmmod nvme_tcp 00:25:11.895 rmmod nvme_fabrics 00:25:11.895 rmmod nvme_keyring 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1632868 ']' 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1632868 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1632868 ']' 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1632868 
00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632868 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632868' 00:25:11.895 killing process with pid 1632868 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1632868 00:25:11.895 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1632868 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.155 14:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.155 14:44:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.061 00:25:14.061 real 0m10.486s 00:25:14.061 user 0m4.052s 00:25:14.061 sys 0m4.903s 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:14.061 ************************************ 00:25:14.061 END TEST nvmf_wait_for_buf 00:25:14.061 ************************************ 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.061 14:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:20.633 
14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:20.633 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:20.633 14:44:31 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:20.633 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:20.633 Found net devices under 0000:86:00.0: cvl_0_0 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:20.633 Found net devices under 0000:86:00.1: cvl_0_1 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.633 ************************************ 00:25:20.633 START TEST nvmf_perf_adq 00:25:20.633 ************************************ 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:20.633 * Looking for test storage... 00:25:20.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.633 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.634 --rc genhtml_branch_coverage=1 00:25:20.634 --rc genhtml_function_coverage=1 00:25:20.634 --rc genhtml_legend=1 00:25:20.634 --rc geninfo_all_blocks=1 00:25:20.634 --rc geninfo_unexecuted_blocks=1 00:25:20.634 00:25:20.634 ' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.634 --rc genhtml_branch_coverage=1 00:25:20.634 --rc genhtml_function_coverage=1 00:25:20.634 --rc genhtml_legend=1 00:25:20.634 --rc geninfo_all_blocks=1 00:25:20.634 --rc geninfo_unexecuted_blocks=1 00:25:20.634 00:25:20.634 ' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.634 --rc genhtml_branch_coverage=1 00:25:20.634 --rc genhtml_function_coverage=1 00:25:20.634 --rc genhtml_legend=1 00:25:20.634 --rc geninfo_all_blocks=1 00:25:20.634 --rc geninfo_unexecuted_blocks=1 00:25:20.634 00:25:20.634 ' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.634 --rc genhtml_branch_coverage=1 00:25:20.634 --rc genhtml_function_coverage=1 00:25:20.634 --rc genhtml_legend=1 00:25:20.634 --rc geninfo_all_blocks=1 00:25:20.634 --rc geninfo_unexecuted_blocks=1 00:25:20.634 00:25:20.634 ' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.634 14:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.634 14:44:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:25.946 14:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:25.946 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:25.946 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.946 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:25.947 Found net devices under 0000:86:00.0: cvl_0_0 00:25:25.947 14:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:25.947 Found net devices under 0000:86:00.1: cvl_0_1 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:25:25.947 14:44:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:26.948 14:44:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:28.853 14:44:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:34.129 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:34.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:34.130 14:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:34.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:34.130 Found net devices under 0000:86:00.0: cvl_0_0 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:34.130 Found net devices under 0000:86:00.1: cvl_0_1 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:34.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:25:34.130 00:25:34.130 --- 10.0.0.2 ping statistics --- 00:25:34.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.130 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:25:34.130 00:25:34.130 --- 10.0.0.1 ping statistics --- 00:25:34.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.130 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.130 14:44:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.130 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:34.130 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:25:34.130 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.130 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.130 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1643097 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1643097 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1643097 ']' 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.131 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.131 [2024-11-20 14:44:46.066449] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:25:34.131 [2024-11-20 14:44:46.066497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.390 [2024-11-20 14:44:46.147048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.390 [2024-11-20 14:44:46.190885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.390 [2024-11-20 14:44:46.190923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.390 [2024-11-20 14:44:46.190930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.390 [2024-11-20 14:44:46.190936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.390 [2024-11-20 14:44:46.190942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:34.390 [2024-11-20 14:44:46.192578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.390 [2024-11-20 14:44:46.192697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.390 [2024-11-20 14:44:46.192701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.390 [2024-11-20 14:44:46.192682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:34.390 14:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.390 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 [2024-11-20 14:44:46.422877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 Malloc1 00:25:34.649 14:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:34.649 [2024-11-20 14:44:46.485669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1643332 00:25:34.649 14:44:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:34.649 14:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:36.554 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:36.554 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.554 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:36.812 "tick_rate": 2300000000, 00:25:36.812 "poll_groups": [ 00:25:36.812 { 00:25:36.812 "name": "nvmf_tgt_poll_group_000", 00:25:36.812 "admin_qpairs": 1, 00:25:36.812 "io_qpairs": 1, 00:25:36.812 "current_admin_qpairs": 1, 00:25:36.812 "current_io_qpairs": 1, 00:25:36.812 "pending_bdev_io": 0, 00:25:36.812 "completed_nvme_io": 19736, 00:25:36.812 "transports": [ 00:25:36.812 { 00:25:36.812 "trtype": "TCP" 00:25:36.812 } 00:25:36.812 ] 00:25:36.812 }, 00:25:36.812 { 00:25:36.812 "name": "nvmf_tgt_poll_group_001", 00:25:36.812 "admin_qpairs": 0, 00:25:36.812 "io_qpairs": 1, 00:25:36.812 "current_admin_qpairs": 0, 00:25:36.812 "current_io_qpairs": 1, 00:25:36.812 "pending_bdev_io": 0, 00:25:36.812 "completed_nvme_io": 19952, 00:25:36.812 "transports": [ 00:25:36.812 { 00:25:36.812 "trtype": "TCP" 00:25:36.812 } 00:25:36.812 ] 00:25:36.812 }, 00:25:36.812 { 00:25:36.812 "name": "nvmf_tgt_poll_group_002", 00:25:36.812 "admin_qpairs": 0, 00:25:36.812 "io_qpairs": 1, 00:25:36.812 "current_admin_qpairs": 0, 00:25:36.812 "current_io_qpairs": 1, 00:25:36.812 "pending_bdev_io": 0, 00:25:36.812 "completed_nvme_io": 19886, 00:25:36.812 
"transports": [ 00:25:36.812 { 00:25:36.812 "trtype": "TCP" 00:25:36.812 } 00:25:36.812 ] 00:25:36.812 }, 00:25:36.812 { 00:25:36.812 "name": "nvmf_tgt_poll_group_003", 00:25:36.812 "admin_qpairs": 0, 00:25:36.812 "io_qpairs": 1, 00:25:36.812 "current_admin_qpairs": 0, 00:25:36.812 "current_io_qpairs": 1, 00:25:36.812 "pending_bdev_io": 0, 00:25:36.812 "completed_nvme_io": 19581, 00:25:36.812 "transports": [ 00:25:36.812 { 00:25:36.812 "trtype": "TCP" 00:25:36.812 } 00:25:36.812 ] 00:25:36.812 } 00:25:36.812 ] 00:25:36.812 }' 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:36.812 14:44:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1643332 00:25:44.935 Initializing NVMe Controllers 00:25:44.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:44.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:44.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:44.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:44.935 Initialization complete. Launching workers. 
00:25:44.935 ======================================================== 00:25:44.935 Latency(us) 00:25:44.935 Device Information : IOPS MiB/s Average min max 00:25:44.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10429.64 40.74 6135.53 2392.88 10202.72 00:25:44.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10589.24 41.36 6043.87 1792.56 10642.46 00:25:44.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10556.54 41.24 6062.39 2002.06 10207.22 00:25:44.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10519.74 41.09 6084.75 2488.93 10244.65 00:25:44.935 ======================================================== 00:25:44.935 Total : 42095.16 164.43 6081.44 1792.56 10642.46 00:25:44.935 00:25:44.935 [2024-11-20 14:44:56.642608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67500 is same with the state(6) to be set 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.935 rmmod nvme_tcp 00:25:44.935 rmmod nvme_fabrics 00:25:44.935 rmmod nvme_keyring 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
# set -e 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1643097 ']' 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1643097 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1643097 ']' 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1643097 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643097 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643097' 00:25:44.935 killing process with pid 1643097 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1643097 00:25:44.935 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1643097 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 
-- # iptr 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.195 14:44:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.102 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.102 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:47.102 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:47.102 14:44:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:48.481 14:45:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:50.387 14:45:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:55.664 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:55.664 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.664 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.664 14:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:55.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.665 
14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:55.665 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:25:55.665 Found net devices under 0000:86:00.0: cvl_0_0 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:55.665 Found net devices under 0000:86:00.1: cvl_0_1 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.665 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:25:55.666 00:25:55.666 --- 10.0.0.2 ping statistics --- 00:25:55.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.666 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:55.666 00:25:55.666 --- 10.0.0.1 ping statistics --- 00:25:55.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.666 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:55.666 net.core.busy_poll = 1 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:55.666 net.core.busy_read = 1 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.666 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1647067 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1647067 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1647067 ']' 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 [2024-11-20 14:45:07.673880] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:25:55.925 [2024-11-20 14:45:07.673926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.925 [2024-11-20 14:45:07.752737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.925 [2024-11-20 14:45:07.797647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.925 [2024-11-20 14:45:07.797685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.925 [2024-11-20 14:45:07.797692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.925 [2024-11-20 14:45:07.797699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:55.925 [2024-11-20 14:45:07.797704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.925 [2024-11-20 14:45:07.799307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.925 [2024-11-20 14:45:07.799428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.925 [2024-11-20 14:45:07.799535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.925 [2024-11-20 14:45:07.799536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.925 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 [2024-11-20 14:45:08.014433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 Malloc1 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:56.184 [2024-11-20 14:45:08.073177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1647206 
00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:56.184 14:45:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:25:58.719 "tick_rate": 2300000000, 00:25:58.719 "poll_groups": [ 00:25:58.719 { 00:25:58.719 "name": "nvmf_tgt_poll_group_000", 00:25:58.719 "admin_qpairs": 1, 00:25:58.719 "io_qpairs": 3, 00:25:58.719 "current_admin_qpairs": 1, 00:25:58.719 "current_io_qpairs": 3, 00:25:58.719 "pending_bdev_io": 0, 00:25:58.719 "completed_nvme_io": 28022, 00:25:58.719 "transports": [ 00:25:58.719 { 00:25:58.719 "trtype": "TCP" 00:25:58.719 } 00:25:58.719 ] 00:25:58.719 }, 00:25:58.719 { 00:25:58.719 "name": "nvmf_tgt_poll_group_001", 00:25:58.719 "admin_qpairs": 0, 00:25:58.719 "io_qpairs": 1, 00:25:58.719 "current_admin_qpairs": 0, 00:25:58.719 "current_io_qpairs": 1, 00:25:58.719 "pending_bdev_io": 0, 00:25:58.719 "completed_nvme_io": 27252, 00:25:58.719 "transports": [ 00:25:58.719 { 00:25:58.719 "trtype": "TCP" 00:25:58.719 } 00:25:58.719 ] 00:25:58.719 }, 00:25:58.719 { 00:25:58.719 "name": "nvmf_tgt_poll_group_002", 00:25:58.719 "admin_qpairs": 0, 00:25:58.719 "io_qpairs": 0, 00:25:58.719 "current_admin_qpairs": 0, 
00:25:58.719 "current_io_qpairs": 0, 00:25:58.719 "pending_bdev_io": 0, 00:25:58.719 "completed_nvme_io": 0, 00:25:58.719 "transports": [ 00:25:58.719 { 00:25:58.719 "trtype": "TCP" 00:25:58.719 } 00:25:58.719 ] 00:25:58.719 }, 00:25:58.719 { 00:25:58.719 "name": "nvmf_tgt_poll_group_003", 00:25:58.719 "admin_qpairs": 0, 00:25:58.719 "io_qpairs": 0, 00:25:58.719 "current_admin_qpairs": 0, 00:25:58.719 "current_io_qpairs": 0, 00:25:58.719 "pending_bdev_io": 0, 00:25:58.719 "completed_nvme_io": 0, 00:25:58.719 "transports": [ 00:25:58.719 { 00:25:58.719 "trtype": "TCP" 00:25:58.719 } 00:25:58.719 ] 00:25:58.719 } 00:25:58.719 ] 00:25:58.719 }' 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:25:58.719 14:45:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1647206 00:26:06.835 Initializing NVMe Controllers 00:26:06.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:06.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:06.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:06.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:06.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:06.835 Initialization complete. Launching workers. 
00:26:06.835 ======================================================== 00:26:06.835 Latency(us) 00:26:06.835 Device Information : IOPS MiB/s Average min max 00:26:06.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4623.80 18.06 13843.10 923.05 59974.96 00:26:06.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14965.30 58.46 4276.07 1904.45 7119.91 00:26:06.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4891.60 19.11 13084.09 1882.27 60127.25 00:26:06.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5056.10 19.75 12655.97 1540.77 59932.66 00:26:06.835 ======================================================== 00:26:06.835 Total : 29536.80 115.38 8666.90 923.05 60127.25 00:26:06.835 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.835 rmmod nvme_tcp 00:26:06.835 rmmod nvme_fabrics 00:26:06.835 rmmod nvme_keyring 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:06.835 14:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1647067 ']' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1647067 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1647067 ']' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1647067 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647067 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647067' 00:26:06.835 killing process with pid 1647067 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1647067 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1647067 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:06.835 
14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.835 14:45:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:10.130 00:26:10.130 real 0m49.973s 00:26:10.130 user 2m43.819s 00:26:10.130 sys 0m10.403s 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:10.130 ************************************ 00:26:10.130 END TEST nvmf_perf_adq 00:26:10.130 ************************************ 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.130 ************************************ 00:26:10.130 START TEST nvmf_shutdown 00:26:10.130 ************************************ 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:10.130 * Looking for test storage... 00:26:10.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:10.130 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.131 14:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.131 --rc genhtml_branch_coverage=1 00:26:10.131 --rc genhtml_function_coverage=1 00:26:10.131 --rc genhtml_legend=1 00:26:10.131 --rc geninfo_all_blocks=1 00:26:10.131 --rc geninfo_unexecuted_blocks=1 00:26:10.131 00:26:10.131 ' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.131 --rc genhtml_branch_coverage=1 00:26:10.131 --rc genhtml_function_coverage=1 00:26:10.131 --rc genhtml_legend=1 00:26:10.131 --rc geninfo_all_blocks=1 00:26:10.131 --rc geninfo_unexecuted_blocks=1 00:26:10.131 00:26:10.131 ' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.131 --rc genhtml_branch_coverage=1 00:26:10.131 --rc genhtml_function_coverage=1 00:26:10.131 --rc genhtml_legend=1 00:26:10.131 --rc geninfo_all_blocks=1 00:26:10.131 --rc geninfo_unexecuted_blocks=1 00:26:10.131 00:26:10.131 ' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:10.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.131 --rc genhtml_branch_coverage=1 00:26:10.131 --rc genhtml_function_coverage=1 00:26:10.131 --rc genhtml_legend=1 00:26:10.131 --rc geninfo_all_blocks=1 00:26:10.131 --rc geninfo_unexecuted_blocks=1 00:26:10.131 00:26:10.131 ' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:10.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:10.131 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:10.132 ************************************ 00:26:10.132 START TEST nvmf_shutdown_tc1 00:26:10.132 ************************************ 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:10.132 14:45:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:16.707 14:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.707 14:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:16.707 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.707 14:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:16.707 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.707 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:16.708 Found net devices under 0000:86:00.0: cvl_0_0 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:16.708 Found net devices under 0000:86:00.1: cvl_0_1 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.708 14:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:16.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:16.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms
00:26:16.708
00:26:16.708 --- 10.0.0.2 ping statistics ---
00:26:16.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:16.708 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:16.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:16.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms
00:26:16.708
00:26:16.708 --- 10.0.0.1 ping statistics ---
00:26:16.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:16.708 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1652988 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1652988 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1652988 ']' 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:16.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.708 14:45:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.708 [2024-11-20 14:45:27.974144] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:26:16.708 [2024-11-20 14:45:27.974197] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.708 [2024-11-20 14:45:28.055253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.708 [2024-11-20 14:45:28.098117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.708 [2024-11-20 14:45:28.098156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.708 [2024-11-20 14:45:28.098164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.708 [2024-11-20 14:45:28.098170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.708 [2024-11-20 14:45:28.098175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
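The interface/namespace wiring performed earlier in this test (the `ip netns` / `ip addr` / `iptables` sequence traced at nvmf/common.sh@265-291) can be sketched as a standalone dry-run script. Interface names (cvl_0_0/cvl_0_1), addresses, and port 4420 come from the log above; the `run()` wrapper is a hypothetical stand-in that records and prints each command instead of executing it as root:

```shell
# Dry-run sketch of the namespace setup from nvmf/common.sh (@265-291).
# Assumption: cvl_0_0 and cvl_0_1 are two ports of one NIC, cabled back-to-back
# or on the same switch, as in this log. Drop run() and execute as root to apply.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
CMDS=""
run() { CMDS="$CMDS$*"$'\n'; echo "+ $*"; }

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target-side port
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface:
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Both directions are pinged before the target app starts:
run ping -c 1 10.0.0.2
run ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

Isolating the target-side port in its own namespace is what lets a single host exercise real TCP traffic between target and initiator, which is why the log then launches `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`.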
00:26:16.708 [2024-11-20 14:45:28.099858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.708 [2024-11-20 14:45:28.099984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.708 [2024-11-20 14:45:28.100089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.708 [2024-11-20 14:45:28.100089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.708 [2024-11-20 14:45:28.238762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.708 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.709 14:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.709 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.709 Malloc1 00:26:16.709 [2024-11-20 14:45:28.343877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.709 Malloc2 00:26:16.709 Malloc3 00:26:16.709 Malloc4 00:26:16.709 Malloc5 00:26:16.709 Malloc6 00:26:16.709 Malloc7 00:26:16.709 Malloc8 00:26:16.968 Malloc9 
00:26:16.968 Malloc10 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1653050 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1653050 /var/tmp/bdevperf.sock 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1653050 ']' 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:16.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.968 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": 
${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 
00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 [2024-11-20 14:45:28.814890] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:26:16.969 [2024-11-20 14:45:28.814938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 
00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:16.969 { 00:26:16.969 "params": { 00:26:16.969 "name": "Nvme$subsystem", 00:26:16.969 "trtype": "$TEST_TRANSPORT", 00:26:16.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.969 "adrfam": "ipv4", 00:26:16.969 "trsvcid": "$NVMF_PORT", 00:26:16.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.969 "hdgst": ${hdgst:-false}, 00:26:16.969 "ddgst": ${ddgst:-false} 00:26:16.969 }, 00:26:16.969 "method": "bdev_nvme_attach_controller" 00:26:16.969 } 00:26:16.969 EOF 00:26:16.969 )") 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
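The JSON fed to bdev_svc/bdevperf is built by the `gen_nvmf_target_json` pattern traced above: one heredoc fragment per subsystem is accumulated into an array, then the fragments are comma-joined via `IFS=,` and `"${config[*]}"`. A minimal standalone sketch of that pattern (the environment-variable defaults are assumptions for illustration; the real helper additionally pipes the result through `jq .` and wraps it in a top-level object):

```shell
# Standalone sketch of the config+=("$(cat <<EOF ... )") pattern from
# nvmf/common.sh@560-586. Defaults below are assumed values, not the real ones.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # The heredoc is expanded at append time, so each fragment
        # has its own $subsystem number baked in.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins elements with the first character of IFS,
    # producing the comma-separated object list seen in the printf trace.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

out=$(gen_nvmf_target_json 1 2)
printf '%s\n' "$out"
```

Called as `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` (as in the log), this yields the ten `bdev_nvme_attach_controller` entries that the subsequent `printf '%s\n'` trace shows fully expanded.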
00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:16.969 14:45:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:16.969 "params": { 00:26:16.970 "name": "Nvme1", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme2", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme3", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme4", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 
00:26:16.970 "name": "Nvme5", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme6", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme7", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme8", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme9", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 },{ 00:26:16.970 "params": { 00:26:16.970 "name": "Nvme10", 00:26:16.970 "trtype": "tcp", 00:26:16.970 "traddr": "10.0.0.2", 00:26:16.970 "adrfam": "ipv4", 00:26:16.970 "trsvcid": "4420", 00:26:16.970 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:16.970 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:16.970 "hdgst": false, 00:26:16.970 "ddgst": false 00:26:16.970 }, 00:26:16.970 "method": "bdev_nvme_attach_controller" 00:26:16.970 }' 00:26:16.970 [2024-11-20 14:45:28.894765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.229 [2024-11-20 14:45:28.936920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1653050 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:26:19.132 14:45:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:26:20.071 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1653050 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1652988 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.071 { 00:26:20.071 "params": { 00:26:20.071 "name": "Nvme$subsystem", 00:26:20.071 "trtype": "$TEST_TRANSPORT", 00:26:20.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.071 "adrfam": "ipv4", 00:26:20.071 "trsvcid": "$NVMF_PORT", 00:26:20.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.071 "hdgst": ${hdgst:-false}, 00:26:20.071 "ddgst": ${ddgst:-false} 00:26:20.071 }, 00:26:20.071 "method": "bdev_nvme_attach_controller" 00:26:20.071 } 00:26:20.071 EOF 00:26:20.071 )") 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.071 14:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.071 { 00:26:20.071 "params": { 00:26:20.071 "name": "Nvme$subsystem", 00:26:20.071 "trtype": "$TEST_TRANSPORT", 00:26:20.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.071 "adrfam": "ipv4", 00:26:20.071 "trsvcid": "$NVMF_PORT", 00:26:20.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.071 "hdgst": ${hdgst:-false}, 00:26:20.071 "ddgst": ${ddgst:-false} 00:26:20.071 }, 00:26:20.071 "method": "bdev_nvme_attach_controller" 00:26:20.071 } 00:26:20.071 EOF 00:26:20.071 )") 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.071 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.071 { 00:26:20.071 "params": { 00:26:20.071 "name": "Nvme$subsystem", 00:26:20.071 "trtype": "$TEST_TRANSPORT", 00:26:20.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.071 "adrfam": "ipv4", 00:26:20.071 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 
14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 [2024-11-20 14:45:31.755847] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:26:20.072 [2024-11-20 14:45:31.755895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653536 ] 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": 
"bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:20.072 { 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme$subsystem", 00:26:20.072 "trtype": "$TEST_TRANSPORT", 00:26:20.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "$NVMF_PORT", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.072 "hdgst": ${hdgst:-false}, 00:26:20.072 "ddgst": ${ddgst:-false} 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 } 00:26:20.072 EOF 00:26:20.072 )") 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:20.072 14:45:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme1", 00:26:20.072 "trtype": "tcp", 00:26:20.072 "traddr": "10.0.0.2", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "4420", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.072 "hdgst": false, 00:26:20.072 "ddgst": false 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.072 },{ 00:26:20.072 "params": { 00:26:20.072 "name": "Nvme2", 00:26:20.072 "trtype": "tcp", 00:26:20.072 "traddr": "10.0.0.2", 00:26:20.072 "adrfam": "ipv4", 00:26:20.072 "trsvcid": "4420", 00:26:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:20.072 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:20.072 "hdgst": false, 00:26:20.072 "ddgst": false 00:26:20.072 }, 00:26:20.072 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme3", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme4", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 
00:26:20.073 "name": "Nvme5", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme6", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme7", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme8", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme9", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 },{ 00:26:20.073 "params": { 00:26:20.073 "name": "Nvme10", 00:26:20.073 "trtype": "tcp", 00:26:20.073 "traddr": "10.0.0.2", 00:26:20.073 "adrfam": "ipv4", 00:26:20.073 "trsvcid": "4420", 00:26:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:20.073 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:20.073 "hdgst": false, 00:26:20.073 "ddgst": false 00:26:20.073 }, 00:26:20.073 "method": "bdev_nvme_attach_controller" 00:26:20.073 }' 00:26:20.073 [2024-11-20 14:45:31.836901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.073 [2024-11-20 14:45:31.878696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.983 Running I/O for 1 seconds... 00:26:22.922 2185.00 IOPS, 136.56 MiB/s 00:26:22.922 Latency(us) 00:26:22.922 [2024-11-20T13:45:34.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme1n1 : 1.05 242.93 15.18 0.00 0.00 260920.99 15842.62 231598.53 00:26:22.922 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme2n1 : 1.14 280.09 17.51 0.00 0.00 221686.21 17096.35 216097.84 00:26:22.922 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme3n1 : 1.13 282.32 17.65 0.00 0.00 218196.68 13962.02 226127.69 00:26:22.922 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme4n1 : 1.10 294.40 18.40 0.00 0.00 205255.95 5242.88 217009.64 00:26:22.922 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme5n1 : 1.15 278.03 17.38 0.00 0.00 214633.43 16640.45 220656.86 00:26:22.922 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme6n1 : 1.16 276.75 17.30 0.00 0.00 213243.81 17210.32 218833.25 00:26:22.922 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme7n1 : 1.15 278.92 17.43 0.00 0.00 208278.04 14588.88 231598.53 00:26:22.922 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme8n1 : 1.14 284.37 17.77 0.00 0.00 200402.26 4160.11 212450.62 00:26:22.922 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme9n1 : 1.13 226.59 14.16 0.00 0.00 248089.82 18236.10 235245.75 00:26:22.922 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:22.922 Verification LBA range: start 0x0 length 0x400 00:26:22.922 Nvme10n1 : 1.16 275.40 17.21 0.00 0.00 201180.47 13962.02 238892.97 00:26:22.922 [2024-11-20T13:45:34.880Z] =================================================================================================================== 00:26:22.922 [2024-11-20T13:45:34.880Z] Total : 2719.80 169.99 0.00 0.00 217674.54 4160.11 238892.97 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.181 14:45:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.181 rmmod nvme_tcp 00:26:23.181 rmmod nvme_fabrics 00:26:23.181 rmmod nvme_keyring 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1652988 ']' 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1652988 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1652988 ']' 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1652988 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1652988 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1652988' 00:26:23.181 killing process with pid 1652988 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1652988 00:26:23.181 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1652988 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.750 14:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.750 14:45:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.658 00:26:25.658 real 0m15.635s 00:26:25.658 user 0m35.689s 00:26:25.658 sys 0m5.850s 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:25.658 ************************************ 00:26:25.658 END TEST nvmf_shutdown_tc1 00:26:25.658 ************************************ 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:25.658 ************************************ 00:26:25.658 
START TEST nvmf_shutdown_tc2 00:26:25.658 ************************************ 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:25.658 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.658 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.659 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.659 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:25.659 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:25.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:25.659 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.659 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.919 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:25.919 Found net devices under 0000:86:00.0: cvl_0_0 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:25.919 Found net devices under 0000:86:00.1: cvl_0_1 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.919 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:25.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:26:25.919 00:26:25.919 --- 10.0.0.2 ping statistics --- 00:26:25.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.919 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:26:25.919 00:26:25.919 --- 10.0.0.1 ping statistics --- 00:26:25.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.919 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:25.919 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.920 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.920 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.920 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.920 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.920 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.920 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:26.179 14:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1654748 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1654748 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1654748 ']' 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.179 14:45:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.179 [2024-11-20 14:45:37.945062] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:26:26.179 [2024-11-20 14:45:37.945105] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.179 [2024-11-20 14:45:38.026063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.179 [2024-11-20 14:45:38.068049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.179 [2024-11-20 14:45:38.068088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.179 [2024-11-20 14:45:38.068095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.179 [2024-11-20 14:45:38.068102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.179 [2024-11-20 14:45:38.068110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.179 [2024-11-20 14:45:38.069571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.179 [2024-11-20 14:45:38.069685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.179 [2024-11-20 14:45:38.069792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.179 [2024-11-20 14:45:38.069793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.439 [2024-11-20 14:45:38.215231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.439 14:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.439 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.440 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.440 Malloc1 00:26:26.440 [2024-11-20 14:45:38.319746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.440 Malloc2 00:26:26.440 Malloc3 00:26:26.699 Malloc4 00:26:26.699 Malloc5 00:26:26.699 Malloc6 00:26:26.699 Malloc7 00:26:26.699 Malloc8 00:26:26.699 Malloc9 
00:26:26.960 Malloc10 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1654805 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1654805 /var/tmp/bdevperf.sock 00:26:26.960 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1654805 ']' 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:26.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 
"adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": 
${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 [2024-11-20 14:45:38.793013] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:26:26.961 [2024-11-20 14:45:38.793059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654805 ] 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": "bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.961 { 00:26:26.961 "params": { 00:26:26.961 "name": "Nvme$subsystem", 00:26:26.961 "trtype": "$TEST_TRANSPORT", 00:26:26.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.961 "adrfam": "ipv4", 00:26:26.961 "trsvcid": "$NVMF_PORT", 00:26:26.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.961 "hdgst": ${hdgst:-false}, 00:26:26.961 "ddgst": ${ddgst:-false} 00:26:26.961 }, 00:26:26.961 "method": 
"bdev_nvme_attach_controller" 00:26:26.961 } 00:26:26.961 EOF 00:26:26.961 )") 00:26:26.961 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.962 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.962 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.962 { 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme$subsystem", 00:26:26.962 "trtype": "$TEST_TRANSPORT", 00:26:26.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "$NVMF_PORT", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.962 "hdgst": ${hdgst:-false}, 00:26:26.962 "ddgst": ${ddgst:-false} 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 } 00:26:26.962 EOF 00:26:26.962 )") 00:26:26.962 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:26.962 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:26:26.962 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:26.962 14:45:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme1", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme2", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme3", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme4", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 
00:26:26.962 "name": "Nvme5", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme6", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme7", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme8", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme9", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 },{ 00:26:26.962 "params": { 00:26:26.962 "name": "Nvme10", 00:26:26.962 "trtype": "tcp", 00:26:26.962 "traddr": "10.0.0.2", 00:26:26.962 "adrfam": "ipv4", 00:26:26.962 "trsvcid": "4420", 00:26:26.962 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:26.962 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:26.962 "hdgst": false, 00:26:26.962 "ddgst": false 00:26:26.962 }, 00:26:26.962 "method": "bdev_nvme_attach_controller" 00:26:26.962 }' 00:26:26.962 [2024-11-20 14:45:38.868162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.962 [2024-11-20 14:45:38.910343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.867 Running I/O for 10 seconds... 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:28.867 14:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=74 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 74 -ge 100 ']' 00:26:28.867 14:45:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:29.126 14:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=149 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 149 -ge 100 ']' 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1654805 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1654805 ']' 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1654805 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:29.126 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:29.126 14:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654805 00:26:29.385 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:29.385 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:29.385 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654805' 00:26:29.385 killing process with pid 1654805 00:26:29.385 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1654805 00:26:29.385 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1654805 00:26:29.385 Received shutdown signal, test time was about 0.849792 seconds 00:26:29.385 00:26:29.385 Latency(us) 00:26:29.385 [2024-11-20T13:45:41.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.385 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme1n1 : 0.84 312.44 19.53 0.00 0.00 201480.95 6268.66 208803.39 00:26:29.385 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme2n1 : 0.81 235.77 14.74 0.00 0.00 262856.35 19033.93 219745.06 00:26:29.385 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme3n1 : 0.84 306.12 19.13 0.00 0.00 198484.15 25416.57 212450.62 00:26:29.385 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme4n1 : 0.84 310.45 19.40 0.00 0.00 191531.25 
2179.78 235245.75 00:26:29.385 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme5n1 : 0.82 234.22 14.64 0.00 0.00 248729.90 17894.18 210627.01 00:26:29.385 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme6n1 : 0.83 232.72 14.55 0.00 0.00 245100.48 20401.64 235245.75 00:26:29.385 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme7n1 : 0.85 301.51 18.84 0.00 0.00 185291.02 15614.66 210627.01 00:26:29.385 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme8n1 : 0.85 301.86 18.87 0.00 0.00 181572.79 14816.83 225215.89 00:26:29.385 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme9n1 : 0.83 232.15 14.51 0.00 0.00 229698.34 23706.94 228863.11 00:26:29.385 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.385 Verification LBA range: start 0x0 length 0x400 00:26:29.385 Nvme10n1 : 0.83 235.39 14.71 0.00 0.00 221224.92 3319.54 246187.41 00:26:29.385 [2024-11-20T13:45:41.343Z] =================================================================================================================== 00:26:29.385 [2024-11-20T13:45:41.343Z] Total : 2702.63 168.91 0.00 0.00 212967.30 2179.78 246187.41 00:26:29.644 14:45:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1654748 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.582 rmmod nvme_tcp 00:26:30.582 rmmod nvme_fabrics 00:26:30.582 rmmod nvme_keyring 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1654748 ']' 
00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1654748 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1654748 ']' 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1654748 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654748 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654748' 00:26:30.582 killing process with pid 1654748 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1654748 00:26:30.582 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1654748 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.151 14:45:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.151 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.152 14:45:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.199 00:26:33.199 real 0m7.342s 00:26:33.199 user 0m21.626s 00:26:33.199 sys 0m1.313s 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:33.199 ************************************ 00:26:33.199 END TEST nvmf_shutdown_tc2 00:26:33.199 ************************************ 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:33.199 14:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:33.199 ************************************ 00:26:33.199 START TEST nvmf_shutdown_tc3 00:26:33.199 ************************************ 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.199 14:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:33.199 14:45:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:33.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.199 14:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:33.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:33.199 Found net devices under 0000:86:00.0: cvl_0_0 00:26:33.199 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:33.200 Found net devices under 0000:86:00.1: cvl_0_1 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.200 
14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.200 14:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.200 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:26:33.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:26:33.459 00:26:33.459 --- 10.0.0.2 ping statistics --- 00:26:33.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.459 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:33.459 00:26:33.459 --- 10.0.0.1 ping statistics --- 00:26:33.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.459 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1656053 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1656053 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1656053 ']' 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.459 14:45:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:33.459 [2024-11-20 14:45:45.350975] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:26:33.459 [2024-11-20 14:45:45.351027] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.721 [2024-11-20 14:45:45.431814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.721 [2024-11-20 14:45:45.474376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.721 [2024-11-20 14:45:45.474413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.721 [2024-11-20 14:45:45.474420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.721 [2024-11-20 14:45:45.474426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.721 [2024-11-20 14:45:45.474432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:33.721 [2024-11-20 14:45:45.476065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.721 [2024-11-20 14:45:45.476174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.721 [2024-11-20 14:45:45.476280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.721 [2024-11-20 14:45:45.476280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.289 [2024-11-20 14:45:46.239733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.289 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.549 14:45:46 
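The `waitforlisten` helper traced above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that wait pattern, assuming an illustrative function name, retry budget, and poll interval (the real helper also watches the target pid between polls; details elided here):

```shell
#!/usr/bin/env bash
# Poll until a UNIX domain socket exists, as a stand-in for waiting on an
# RPC listener. Retry budget and interval are illustrative values.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -S is true once the path exists and is a socket
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

A caller would poll the path the target was launched with, e.g. `wait_for_sock /var/tmp/spdk.sock`.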
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.549 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:34.549 Malloc1 00:26:34.549 [2024-11-20 14:45:46.360093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.549 Malloc2 00:26:34.549 Malloc3 00:26:34.549 Malloc4 00:26:34.808 Malloc5 00:26:34.808 Malloc6 00:26:34.808 Malloc7 00:26:34.808 Malloc8 00:26:34.808 Malloc9 
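The `for i in "${num_subsystems[@]}"` / `cat` loop traced above appends one block of RPC commands per subsystem to `rpcs.txt`, which is then replayed in a single `rpc_cmd` batch (hence the ten Malloc bdevs that appear once it runs). A simplified sketch of that generate-then-replay pattern; the RPC names match SPDK's, but the bdev size, serials, and listener details are illustrative stand-ins:

```shell
#!/usr/bin/env bash
# Emit a batch file of RPCs: one malloc bdev, subsystem, namespace and
# listener per index. Sizes, serials and addresses are illustrative.
gen_subsystem_rpcs() {
    local out=$1 count=$2 i
    : > "$out"    # start from an empty file, like the rm -rf in shutdown.sh
    for ((i = 1; i <= count; i++)); do
        printf '%s\n' \
            "bdev_malloc_create -b Malloc$i 64 512" \
            "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
            "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
            "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420" \
            >> "$out"
    done
}
```

Replaying the whole file through one RPC client invocation avoids ten separate round trips, which is presumably why the script batches into `rpcs.txt` rather than calling `rpc_cmd` per subsystem.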
00:26:34.808 Malloc10 00:26:34.808 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.808 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:34.808 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:34.808 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1656329 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1656329 /var/tmp/bdevperf.sock 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1656329 ']' 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.067 14:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.067 { 00:26:35.067 "params": { 00:26:35.067 "name": "Nvme$subsystem", 00:26:35.067 "trtype": "$TEST_TRANSPORT", 00:26:35.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.067 "adrfam": "ipv4", 00:26:35.067 "trsvcid": "$NVMF_PORT", 00:26:35.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.067 "hdgst": ${hdgst:-false}, 00:26:35.067 "ddgst": ${ddgst:-false} 00:26:35.067 }, 00:26:35.067 "method": "bdev_nvme_attach_controller" 00:26:35.067 } 00:26:35.067 EOF 00:26:35.067 )") 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:35.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.067 { 00:26:35.067 "params": { 00:26:35.067 "name": "Nvme$subsystem", 00:26:35.067 "trtype": "$TEST_TRANSPORT", 00:26:35.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.067 "adrfam": "ipv4", 00:26:35.067 "trsvcid": "$NVMF_PORT", 00:26:35.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.067 "hdgst": ${hdgst:-false}, 00:26:35.067 "ddgst": ${ddgst:-false} 00:26:35.067 }, 00:26:35.067 "method": "bdev_nvme_attach_controller" 00:26:35.067 } 00:26:35.067 EOF 00:26:35.067 )") 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.067 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": 
${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 
)") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 
[2024-11-20 14:45:46.834930] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:26:35.068 [2024-11-20 14:45:46.834983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1656329 ] 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": 
${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:35.068 { 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme$subsystem", 00:26:35.068 "trtype": "$TEST_TRANSPORT", 00:26:35.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "$NVMF_PORT", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.068 "hdgst": ${hdgst:-false}, 00:26:35.068 "ddgst": ${ddgst:-false} 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 } 00:26:35.068 EOF 00:26:35.068 )") 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
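`gen_nvmf_target_json` above builds the bdevperf `--json` config by expanding one here-doc fragment per subsystem into a bash array, then joining the fragments with `IFS=,` and piping the result through `jq .`. A condensed sketch of that accumulate-then-join trick, with a fixed address and port standing in for the rig's `$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` and the digest flags hardcoded:

```shell
#!/usr/bin/env bash
# Join one JSON object per subsystem into a single array: the same
# heredoc-into-array, IFS-comma-join pattern the log's helper uses.
gen_target_json() {
    local config=() s
    for s in "$@"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$s", "trtype": "tcp", "traddr": "10.0.0.2",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$s",
  "hostnqn": "nqn.2016-06.io.spdk:host$s",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}"   # the real helper pipes its output into jq .
}
```

With `IFS=,` in scope, `"${config[*]}"` flattens the array into a comma-separated string, which is exactly the `},{` joining visible in the printed config above.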
00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:35.068 14:45:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme1", 00:26:35.068 "trtype": "tcp", 00:26:35.068 "traddr": "10.0.0.2", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "4420", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.068 "hdgst": false, 00:26:35.068 "ddgst": false 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 },{ 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme2", 00:26:35.068 "trtype": "tcp", 00:26:35.068 "traddr": "10.0.0.2", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "4420", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:35.068 "hdgst": false, 00:26:35.068 "ddgst": false 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 },{ 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme3", 00:26:35.068 "trtype": "tcp", 00:26:35.068 "traddr": "10.0.0.2", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "4420", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:35.068 "hdgst": false, 00:26:35.068 "ddgst": false 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.068 },{ 00:26:35.068 "params": { 00:26:35.068 "name": "Nvme4", 00:26:35.068 "trtype": "tcp", 00:26:35.068 "traddr": "10.0.0.2", 00:26:35.068 "adrfam": "ipv4", 00:26:35.068 "trsvcid": "4420", 00:26:35.068 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:35.068 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:35.068 "hdgst": false, 00:26:35.068 "ddgst": false 00:26:35.068 }, 00:26:35.068 "method": "bdev_nvme_attach_controller" 00:26:35.069 },{ 00:26:35.069 "params": { 
00:26:35.069 "name": "Nvme5", 00:26:35.069 "trtype": "tcp", 00:26:35.069 "traddr": "10.0.0.2", 00:26:35.069 "adrfam": "ipv4", 00:26:35.069 "trsvcid": "4420", 00:26:35.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:35.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:35.069 "hdgst": false, 00:26:35.069 "ddgst": false 00:26:35.069 }, 00:26:35.069 "method": "bdev_nvme_attach_controller" 00:26:35.069 },{ 00:26:35.069 "params": { 00:26:35.069 "name": "Nvme6", 00:26:35.069 "trtype": "tcp", 00:26:35.069 "traddr": "10.0.0.2", 00:26:35.069 "adrfam": "ipv4", 00:26:35.069 "trsvcid": "4420", 00:26:35.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:35.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:35.069 "hdgst": false, 00:26:35.069 "ddgst": false 00:26:35.069 }, 00:26:35.069 "method": "bdev_nvme_attach_controller" 00:26:35.069 },{ 00:26:35.069 "params": { 00:26:35.069 "name": "Nvme7", 00:26:35.069 "trtype": "tcp", 00:26:35.069 "traddr": "10.0.0.2", 00:26:35.069 "adrfam": "ipv4", 00:26:35.069 "trsvcid": "4420", 00:26:35.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:35.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:35.069 "hdgst": false, 00:26:35.069 "ddgst": false 00:26:35.069 }, 00:26:35.069 "method": "bdev_nvme_attach_controller" 00:26:35.069 },{ 00:26:35.069 "params": { 00:26:35.069 "name": "Nvme8", 00:26:35.069 "trtype": "tcp", 00:26:35.069 "traddr": "10.0.0.2", 00:26:35.069 "adrfam": "ipv4", 00:26:35.069 "trsvcid": "4420", 00:26:35.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:35.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:35.069 "hdgst": false, 00:26:35.069 "ddgst": false 00:26:35.069 }, 00:26:35.069 "method": "bdev_nvme_attach_controller" 00:26:35.069 },{ 00:26:35.069 "params": { 00:26:35.069 "name": "Nvme9", 00:26:35.069 "trtype": "tcp", 00:26:35.069 "traddr": "10.0.0.2", 00:26:35.069 "adrfam": "ipv4", 00:26:35.069 "trsvcid": "4420", 00:26:35.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:35.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:26:35.069 "hdgst": false, 00:26:35.069 "ddgst": false 00:26:35.069 }, 00:26:35.069 "method": "bdev_nvme_attach_controller" 00:26:35.069 },{ 00:26:35.069 "params": { 00:26:35.069 "name": "Nvme10", 00:26:35.069 "trtype": "tcp", 00:26:35.069 "traddr": "10.0.0.2", 00:26:35.069 "adrfam": "ipv4", 00:26:35.069 "trsvcid": "4420", 00:26:35.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:35.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:35.069 "hdgst": false, 00:26:35.069 "ddgst": false 00:26:35.069 }, 00:26:35.069 "method": "bdev_nvme_attach_controller" 00:26:35.069 }' 00:26:35.069 [2024-11-20 14:45:46.910073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.069 [2024-11-20 14:45:46.952049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.454 Running I/O for 10 seconds... 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:37.022 14:45:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:37.296 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1656053 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1656053 ']' 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1656053 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:26:37.297 14:45:49 
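The `waitforio` loop traced above (target/shutdown.sh@58-70) polls `bdev_get_iostat` up to ten times, 0.25 s apart, and succeeds once `num_read_ops` crosses 100; here the first sample read 67 and the second 131. A generic sketch of that threshold-poll loop, with the probe command parameterized where the real script runs `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
#!/usr/bin/env bash
# Poll a numeric probe until it meets a threshold or retries run out,
# mirroring waitforio's read_io_count loop (10 tries, 0.25 s apart).
wait_for_count() {
    local threshold=$1 retries=$2 count
    shift 2
    while (( retries-- > 0 )); do
        count=$("$@")               # probe command passed as remaining args
        if (( count >= threshold )); then
            return 0
        fi
        sleep 0.25
    done
    return 1
}
```

A caller supplies the probe as trailing arguments, e.g. `wait_for_count 100 10 read_io_probe`, keeping the retry bookkeeping separate from the RPC plumbing.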
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1656053 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1656053' 00:26:37.297 killing process with pid 1656053 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1656053 00:26:37.297 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1656053 00:26:37.297 [2024-11-20 14:45:49.158167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be 
set
*ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 
is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.158617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86850 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.161466] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.297 [2024-11-20 14:45:49.165219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.165298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.165307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.165314] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.297 [2024-11-20 14:45:49.165321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 
is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 
00:26:37.298 [2024-11-20 14:45:49.165552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165627] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.165692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89400 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.166756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86d20 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.166767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd86d20 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.166773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86d20 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.167967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.167991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.167999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 
is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.298 [2024-11-20 14:45:49.168131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 
00:26:37.299 [2024-11-20 14:45:49.168137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168213] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 
is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.168385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd871f0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 00:26:37.299 [2024-11-20 14:45:49.169449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set 
00:26:37.299 [2024-11-20 14:45:49.169455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd876e0 is same with the state(6) to be set
00:26:37.300 [message repeated for tqpair=0xd876e0 through 2024-11-20 14:45:49.169783]
00:26:37.300 [2024-11-20 14:45:49.170523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87bb0 is same with the state(6) to be set
00:26:37.300 [message repeated for tqpair=0xd87bb0 through 2024-11-20 14:45:49.170925]
00:26:37.300 [2024-11-20 14:45:49.171821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd880a0 is same with the state(6) to be set
00:26:37.301 [message repeated for tqpair=0xd880a0 through 2024-11-20 14:45:49.172219]
00:26:37.301 [2024-11-20 14:45:49.173203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88570 is same with the state(6) to be set
00:26:37.301 [2024-11-20 14:45:49.173744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88a40 is same with the state(6) to be set
00:26:37.302 [message repeated for tqpair=0xd88a40 through 2024-11-20 14:45:49.174137]
00:26:37.302 [2024-11-20 14:45:49.174725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set
00:26:37.302 [message repeated for tqpair=0xd88f10 through 2024-11-20 14:45:49.174806]
00:26:37.302 [2024-11-20 14:45:49.174812] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.302 [2024-11-20 14:45:49.174940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 
is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.174997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 
00:26:37.303 [2024-11-20 14:45:49.175043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.175115] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88f10 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.177772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1369ce0 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.177886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.177956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3d0 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.177985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.177994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178017] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed9fe0 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.178071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309940 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.178154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1310850 is same with the state(6) to be set 00:26:37.303 [2024-11-20 14:45:49.178240] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.303 [2024-11-20 14:45:49.178288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.303 [2024-11-20 14:45:49.178295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5d30 is same with the state(6) to be set 00:26:37.304 [2024-11-20 14:45:49.178326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee61b0 is same with the state(6) to be set 00:26:37.304 [2024-11-20 14:45:49.178411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a5b0 is same with the state(6) to be set 00:26:37.304 [2024-11-20 14:45:49.178499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeda1e0 is same with the state(6) to be set 00:26:37.304 [2024-11-20 14:45:49.178586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.304 [2024-11-20 14:45:49.178641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.178647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311ae0 is same with the state(6) to be set 00:26:37.304 [2024-11-20 14:45:49.178968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.178990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 
14:45:49.179281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.304 [2024-11-20 14:45:49.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.304 [2024-11-20 14:45:49.179340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179370] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 
14:45:49.179559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179649] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 
[2024-11-20 14:45:49.179829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.305 [2024-11-20 14:45:49.179886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.305 [2024-11-20 14:45:49.179893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.179902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.179909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.179918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.179924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.179932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.179940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.179953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.179960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.179969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.179977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.179986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.179994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:37.306 [2024-11-20 14:45:49.180344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 
[2024-11-20 14:45:49.180515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.180829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.180837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.191987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.306 [2024-11-20 14:45:49.192012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.306 [2024-11-20 14:45:49.192020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 
[2024-11-20 14:45:49.192048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 
14:45:49.192469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.307 [2024-11-20 14:45:49.192634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.307 [2024-11-20 14:45:49.192957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1369ce0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.192987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135a3d0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed9fe0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1309940 (9): Bad file 
descriptor 00:26:37.307 [2024-11-20 14:45:49.193040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1310850 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5d30 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee61b0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135a5b0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeda1e0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.193117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311ae0 (9): Bad file descriptor 00:26:37.307 [2024-11-20 14:45:49.195557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:37.307 [2024-11-20 14:45:49.195994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:37.307 [2024-11-20 14:45:49.196154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.307 [2024-11-20 14:45:49.196175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeda1e0 with addr=10.0.0.2, port=4420 00:26:37.308 [2024-11-20 14:45:49.196186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeda1e0 is same with the state(6) to be set 00:26:37.308 [2024-11-20 14:45:49.197043] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.308 [2024-11-20 14:45:49.197115] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.308 [2024-11-20 14:45:49.197173] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.308 [2024-11-20 14:45:49.197298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.308 [2024-11-20 14:45:49.197319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311ae0 with addr=10.0.0.2, port=4420 00:26:37.308 [2024-11-20 14:45:49.197332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311ae0 is same with the state(6) to be set 00:26:37.308 [2024-11-20 14:45:49.197349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeda1e0 (9): Bad file descriptor 00:26:37.308 [2024-11-20 14:45:49.197443] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.308 [2024-11-20 14:45:49.197513] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.308 [2024-11-20 14:45:49.197568] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:37.308 [2024-11-20 14:45:49.197640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311ae0 (9): Bad file descriptor 00:26:37.308 [2024-11-20 14:45:49.197664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:37.308 [2024-11-20 14:45:49.197676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:37.308 [2024-11-20 14:45:49.197690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:37.308 [2024-11-20 14:45:49.197702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:26:37.308 [2024-11-20 14:45:49.197778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.197976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.197990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.308 [2024-11-20 14:45:49.198243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.308 [2024-11-20 14:45:49.198616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.308 [2024-11-20 14:45:49.198628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 
14:45:49.198804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198943] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.198984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.198999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 
[2024-11-20 14:45:49.199237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.199411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.199424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e48f0 is same with the state(6) to be set 00:26:37.309 [2024-11-20 14:45:49.199562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:37.309 [2024-11-20 14:45:49.199576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:37.309 [2024-11-20 14:45:49.199587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:37.309 [2024-11-20 14:45:49.199599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:26:37.309 [2024-11-20 14:45:49.201084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:37.309 [2024-11-20 14:45:49.201281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.309 [2024-11-20 14:45:49.201304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed9fe0 with addr=10.0.0.2, port=4420 00:26:37.309 [2024-11-20 14:45:49.201317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed9fe0 is same with the state(6) to be set 00:26:37.309 [2024-11-20 14:45:49.201701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed9fe0 (9): Bad file descriptor 00:26:37.309 [2024-11-20 14:45:49.201781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:37.309 [2024-11-20 14:45:49.201794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:37.309 [2024-11-20 14:45:49.201805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:37.309 [2024-11-20 14:45:49.201816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:26:37.309 [2024-11-20 14:45:49.203110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.309 [2024-11-20 14:45:49.203131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.309 [2024-11-20 14:45:49.203148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.310 [2024-11-20 14:45:49.203556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.203986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.203998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.204011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.310 [2024-11-20 14:45:49.204021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.310 [2024-11-20 14:45:49.204034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 
14:45:49.204122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 
[2024-11-20 14:45:49.204549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.204725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.204737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1412940 is same with the state(6) to be set 00:26:37.311 [2024-11-20 14:45:49.206289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.311 [2024-11-20 14:45:49.206533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.311 [2024-11-20 14:45:49.206570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.311 [2024-11-20 14:45:49.206584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.312 [2024-11-20 14:45:49.206821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.312 [2024-11-20 14:45:49.206829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:21-63, lba:27264-32640, len:128 each ...]
00:26:37.313 [2024-11-20 14:45:49.207573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d7790 is same with the state(6) to be set
00:26:37.313 [2024-11-20 14:45:49.208621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.313 [2024-11-20 14:45:49.208635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1-63, lba:16512-24448, len:128 each ...]
00:26:37.315 [2024-11-20 14:45:49.209722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5b90 is same with the state(6) to be set
00:26:37.315 [2024-11-20 14:45:49.210755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.315 [2024-11-20 14:45:49.210770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1-3, lba:16512-16768, len:128 each ...]
00:26:37.315 [2024-11-20 14:45:49.210833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.315 [2024-11-20 14:45:49.210936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.210992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.210999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 
14:45:49.211321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.315 [2024-11-20 14:45:49.211360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.315 [2024-11-20 14:45:49.211371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211412] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 
[2024-11-20 14:45:49.211605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.211838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.211846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6e50 is same with the state(6) to be set 00:26:37.316 [2024-11-20 14:45:49.212887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.212901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.212912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.212921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.316 [2024-11-20 14:45:49.212931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.212938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.212953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.212962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.212971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.212979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.212987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.212995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.213005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.213012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.213021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.213029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.213039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.213051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.213061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.213070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.316 [2024-11-20 14:45:49.213080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.316 [2024-11-20 14:45:49.213087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.317 [2024-11-20 14:45:49.213229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 
14:45:49.213607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.317 [2024-11-20 14:45:49.213741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.317 [2024-11-20 14:45:49.213753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 
[2024-11-20 14:45:49.213899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.213987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.213995] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9420 is same with the state(6) to be set 00:26:37.318 [2024-11-20 14:45:49.215021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.318 [2024-11-20 14:45:49.215221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.318 [2024-11-20 14:45:49.215448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.318 [2024-11-20 14:45:49.215459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 
14:45:49.215600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 
[2024-11-20 14:45:49.215887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.215986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.215993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.216005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.216013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.216022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.216030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.216040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-20 14:45:49.216047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.319 [2024-11-20 14:45:49.216057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.216065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.216073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.216082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.216091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.216098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.216108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.216116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.216124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ea710 is same with the state(6) to be set 00:26:37.320 [2024-11-20 14:45:49.217173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.320 [2024-11-20 14:45:49.217232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:37.320 [2024-11-20 14:45:49.217513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217601] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-20 14:45:49.217758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.320 [2024-11-20 14:45:49.217766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 
14:45:49.217881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217978] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.217984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.217993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 
[2024-11-20 14:45:49.218167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-20 14:45:49.218234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-20 14:45:49.218243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eb9c0 is same with the state(6) to be set 00:26:37.321 [2024-11-20 14:45:49.219225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:37.321 [2024-11-20 14:45:49.219243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:26:37.321 [2024-11-20 14:45:49.219256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:26:37.321 [2024-11-20 14:45:49.219267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:26:37.321 [2024-11-20 14:45:49.219339] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:26:37.321 [2024-11-20 14:45:49.219354] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:26:37.321 [2024-11-20 14:45:49.219368] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:26:37.321 [2024-11-20 14:45:49.219443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:26:37.321 [2024-11-20 14:45:49.219456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:37.321 task offset: 30208 on job bdev=Nvme2n1 fails
00:26:37.321 
00:26:37.321 Latency(us)
00:26:37.321 [2024-11-20T13:45:49.279Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s    Average        min        max
00:26:37.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.321 Job: Nvme1n1 ended in about 0.80 seconds with error
00:26:37.321 Verification LBA range: start 0x0 length 0x400
00:26:37.321 Nvme1n1  :       0.80  159.63    9.98   79.82    0.00  264109.19   15614.66  242540.19
00:26:37.321 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.321 Job: Nvme2n1 ended in about 0.79 seconds with error
00:26:37.321 Verification LBA range: start 0x0 length 0x400
00:26:37.321 Nvme2n1  :       0.79  242.99   15.19   81.00    0.00  191070.94   14474.91  214274.23
00:26:37.321 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.321 Job: Nvme3n1 ended in about 0.80 seconds with error
00:26:37.321 Verification LBA range: start 0x0 length 0x400
00:26:37.321 Nvme3n1  :       0.80  238.65   14.92   79.55    0.00  190652.44   13164.19  219745.06
00:26:37.321 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.321 Job: Nvme4n1 ended in about 0.80 seconds with error
00:26:37.321 Verification LBA range: start 0x0 length 0x400
00:26:37.321 Nvme4n1  :       0.80  240.99   15.06   80.33    0.00  184732.27   16184.54  218833.25
00:26:37.321 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.321 Job: Nvme5n1 ended in about 0.81 seconds with error
00:26:37.321 Verification LBA range: start 0x0 length 0x400
00:26:37.321 Nvme5n1  :       0.81  158.68    9.92   79.34    0.00  244442.16   17096.35  217009.64
00:26:37.321 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.322 Job: Nvme6n1 ended in about 0.81 seconds with error
00:26:37.322 Verification LBA range: start 0x0 length 0x400
00:26:37.322 Nvme6n1  :       0.81  158.26    9.89   79.13    0.00  239836.53   18919.96  219745.06
00:26:37.322 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.322 Job: Nvme7n1 ended in about 0.79 seconds with error
00:26:37.322 Verification LBA range: start 0x0 length 0x400
00:26:37.322 Nvme7n1  :       0.79  242.62   15.16   80.87    0.00  171508.42   16982.37  218833.25
00:26:37.322 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.322 Job: Nvme8n1 ended in about 0.81 seconds with error
00:26:37.322 Verification LBA range: start 0x0 length 0x400
00:26:37.322 Nvme8n1  :       0.81  157.84    9.87   78.92    0.00  229965.91   14417.92  219745.06
00:26:37.322 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.322 Job: Nvme9n1 ended in about 0.81 seconds with error
00:26:37.322 Verification LBA range: start 0x0 length 0x400
00:26:37.322 Nvme9n1  :       0.81  157.43    9.84   78.72    0.00  225459.87   21769.35  240716.58
00:26:37.322 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:37.322 Job: Nvme10n1 ended in about 0.82 seconds with error
00:26:37.322 Verification LBA range: start 0x0 length 0x400
00:26:37.322 Nvme10n1 :       0.82  157.02    9.81   78.51    0.00  221030.25   20971.52  226127.69
00:26:37.322 [2024-11-20T13:45:49.280Z] ===================================================================================================================
00:26:37.322 [2024-11-20T13:45:49.280Z] Total    :    1914.12  119.63  796.18    0.00  212540.82   13164.19  242540.19
00:26:37.581 [2024-11-20 14:45:49.250474] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:37.581 [2024-11-20 14:45:49.250525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:26:37.581 [2024-11-20 14:45:49.250721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.250741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee61b0 with addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.250758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee61b0 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.250861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.250873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee5d30 with addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.250880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5d30 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.250985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.250995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1310850 with 
addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.251003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1310850 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.251134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.251145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1309940 with addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.251153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309940 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.252779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:37.581 [2024-11-20 14:45:49.252799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:37.581 [2024-11-20 14:45:49.253049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.253065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135a5b0 with addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.253074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a5b0 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.253200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.253212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x135a3d0 with addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.253219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135a3d0 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.253310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.581 [2024-11-20 14:45:49.253319] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1369ce0 with addr=10.0.0.2, port=4420 00:26:37.581 [2024-11-20 14:45:49.253326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1369ce0 is same with the state(6) to be set 00:26:37.581 [2024-11-20 14:45:49.253339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee61b0 (9): Bad file descriptor 00:26:37.581 [2024-11-20 14:45:49.253350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee5d30 (9): Bad file descriptor 00:26:37.581 [2024-11-20 14:45:49.253359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1310850 (9): Bad file descriptor 00:26:37.581 [2024-11-20 14:45:49.253368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1309940 (9): Bad file descriptor 00:26:37.581 [2024-11-20 14:45:49.253397] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:26:37.581 [2024-11-20 14:45:49.253412] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:26:37.581 [2024-11-20 14:45:49.253423] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:26:37.581 [2024-11-20 14:45:49.253433] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:26:37.581 [2024-11-20 14:45:49.253446] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:26:37.582 [2024-11-20 14:45:49.253703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:26:37.582 [2024-11-20 14:45:49.253825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.582 [2024-11-20 14:45:49.253839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeda1e0 with addr=10.0.0.2, port=4420 00:26:37.582 [2024-11-20 14:45:49.253847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeda1e0 is same with the state(6) to be set 00:26:37.582 [2024-11-20 14:45:49.253930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.582 [2024-11-20 14:45:49.253939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1311ae0 with addr=10.0.0.2, port=4420 00:26:37.582 [2024-11-20 14:45:49.253972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311ae0 is same with the state(6) to be set 00:26:37.582 [2024-11-20 14:45:49.253983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135a5b0 (9): Bad file descriptor 00:26:37.582 [2024-11-20 14:45:49.253994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135a3d0 (9): Bad file descriptor 00:26:37.582 [2024-11-20 14:45:49.254003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1369ce0 (9): Bad file descriptor 00:26:37.582 [2024-11-20 14:45:49.254011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:26:37.582 [2024-11-20 14:45:49.254035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:26:37.582 [2024-11-20 14:45:49.254125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.582 [2024-11-20 14:45:49.254369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed9fe0 with addr=10.0.0.2, port=4420 00:26:37.582 [2024-11-20 14:45:49.254376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed9fe0 is same with the state(6) to be set 00:26:37.582 [2024-11-20 14:45:49.254385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeda1e0 (9): Bad file descriptor 00:26:37.582 [2024-11-20 14:45:49.254394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1311ae0 (9): Bad file descriptor 00:26:37.582 [2024-11-20 14:45:49.254402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:26:37.582 [2024-11-20 14:45:49.254427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed9fe0 (9): Bad file descriptor 00:26:37.582 [2024-11-20 14:45:49.254507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:26:37.582 [2024-11-20 14:45:49.254526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:26:37.582 [2024-11-20 14:45:49.254573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:37.582 [2024-11-20 14:45:49.254581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:37.582 [2024-11-20 14:45:49.254588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:37.582 [2024-11-20 14:45:49.254595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
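Every qpair failure above begins with `posix_sock_create: connect() failed, errno = 111`, i.e. ECONNREFUSED: the target side has already shut down, so each reconnect attempt is refused before the controller reset can complete. A minimal way to observe the same condition from bash (a hypothetical helper, not part of the SPDK scripts) is to attempt a real connect through the `/dev/tcp` pseudo-device:

```shell
# Hypothetical probe (not part of SPDK): check whether a TCP listener
# answers at addr:port. A refused connect(2) here is the same errno 111
# (ECONNREFUSED) condition that posix_sock_create reports above once the
# target has gone away.
port_open() {
    local addr=$1 port=$2
    # bash's /dev/tcp/<host>/<port> redirection performs a real connect(2);
    # the subshell closes fd 3 on exit, and timeout bounds a hung connect.
    timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null
}
```

For example, `port_open 10.0.0.2 4420 || echo 'target down'` would report the state the initiator keeps hitting in this trace.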
00:26:37.841 14:45:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:38.779 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1656329 00:26:38.779 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:26:38.779 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1656329 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1656329 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
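The `NOT wait 1656329` trace above shows the harness's exit-status bookkeeping: `wait` returns 255, which is clamped to 127 because it exceeds 128 (a signal-death status), then collapsed to 1, and the `NOT` wrapper passes because the final status is non-zero. A rough reconstruction of that normalization (my sketch of the logic visible in the trace, not the actual autotest_common.sh source):

```shell
# Rough reconstruction of the es normalization traced above: statuses
# above 128 (signal deaths) are clamped to 127, then any remaining
# non-zero status collapses to 1, so "NOT <cmd>" only needs to assert
# that the normalized status is non-zero.
normalize_es() {
    local es=$1
    if (( es > 128 )); then es=127; fi
    case "$es" in
        0) ;;          # success passes through unchanged
        *) es=1 ;;     # any failure collapses to 1
    esac
    echo "$es"
}
```

With this, the sequence in the trace reads as `normalize_es 255` producing 1, which is exactly the `es=255` → `es=127` → `es=1` chain logged by autotest_common.sh.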
00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.780 rmmod nvme_tcp 00:26:38.780 rmmod nvme_fabrics 00:26:38.780 rmmod nvme_keyring 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:38.780 14:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1656053 ']' 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1656053 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1656053 ']' 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1656053 00:26:38.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1656053) - No such process 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1656053 is not found' 00:26:38.780 Process with pid 1656053 is not found 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.780 14:45:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.317 00:26:41.317 real 0m7.763s 00:26:41.317 user 0m18.914s 00:26:41.317 sys 0m1.334s 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:41.317 ************************************ 00:26:41.317 END TEST nvmf_shutdown_tc3 00:26:41.317 ************************************ 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:41.317 ************************************ 00:26:41.317 START TEST nvmf_shutdown_tc4 00:26:41.317 ************************************ 00:26:41.317 14:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.317 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.318 14:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.318 14:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:41.318 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:41.318 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.318 14:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:26:41.318 Found net devices under 0000:86:00.0: cvl_0_0 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:41.318 Found net devices under 0000:86:00.1: cvl_0_1 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.318 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.318 14:45:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.319 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.319 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.319 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.319 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.319 14:45:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:26:41.319 00:26:41.319 --- 10.0.0.2 ping statistics --- 00:26:41.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.319 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:41.319 00:26:41.319 --- 10.0.0.1 ping statistics --- 00:26:41.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.319 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.319 14:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1657358 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1657358 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1657358 ']' 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.319 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.319 [2024-11-20 14:45:53.154583] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:26:41.319 [2024-11-20 14:45:53.154631] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.319 [2024-11-20 14:45:53.233010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.578 [2024-11-20 14:45:53.275863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.578 [2024-11-20 14:45:53.275898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.578 [2024-11-20 14:45:53.275905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.578 [2024-11-20 14:45:53.275911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.578 [2024-11-20 14:45:53.275916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:41.578 [2024-11-20 14:45:53.277565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.578 [2024-11-20 14:45:53.277659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.578 [2024-11-20 14:45:53.277765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.578 [2024-11-20 14:45:53.277765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.578 [2024-11-20 14:45:53.415275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.578 14:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.578 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.579 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:41.579 Malloc1 00:26:41.579 [2024-11-20 14:45:53.528481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.837 Malloc2 00:26:41.838 Malloc3 00:26:41.838 Malloc4 00:26:41.838 Malloc5 00:26:41.838 Malloc6 00:26:41.838 Malloc7 00:26:42.096 Malloc8 00:26:42.096 Malloc9 
00:26:42.096 Malloc10 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1657626 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:42.096 14:45:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:26:42.096 [2024-11-20 14:45:54.043426] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1657358 00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1657358 ']' 00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1657358 00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.375 14:45:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657358 00:26:47.375 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:47.375 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:47.375 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657358' 00:26:47.375 killing process with pid 1657358 00:26:47.375 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1657358 00:26:47.375 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1657358 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 
00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 [2024-11-20 14:45:59.037231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(6) to be set 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 
00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 starting I/O failed: -6 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.375 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 [2024-11-20 14:45:59.037610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.376 starting I/O failed: -6 00:26:47.376 starting I/O failed: -6 00:26:47.376 starting I/O failed: -6 00:26:47.376 starting I/O failed: -6 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 
00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 [2024-11-20 14:45:59.038754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 
00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with 
error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 [2024-11-20 14:45:59.039783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 
starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 
00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.376 Write completed with error (sct=0, sc=8) 00:26:47.376 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, sc=8) 00:26:47.377 starting I/O failed: -6 00:26:47.377 Write completed with error (sct=0, 
sc=8) 00:26:47.377 starting I/O failed: -6
00:26:47.377 Write completed with error (sct=0, sc=8)
00:26:47.377 starting I/O failed: -6
[... the two log entries above repeated; identical entries elided ...]
00:26:47.377 [2024-11-20 14:45:59.041502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:47.377 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" entries repeated, interleaved with "starting I/O failed: -6"; identical entries elided ...]
00:26:47.377 [2024-11-20 14:45:59.045273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.378 [2024-11-20 14:45:59.046086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.378 [2024-11-20 14:45:59.047146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.379 [2024-11-20 14:45:59.048613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851930 is same with the state(6) to be set
[... the entry above repeated 9 more times (timestamps 14:45:59.048645 through 14:45:59.048702); identical entries elided ...]
00:26:47.379 [2024-11-20 14:45:59.048845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.379 NVMe io qpair process completion error
00:26:47.379 [2024-11-20 14:45:59.048965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851e00 is same with the state(6) to be set
[... the entry above repeated 7 more times (timestamps 14:45:59.048989 through 14:45:59.049029); identical entries elided ...]
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.379 [2024-11-20 14:45:59.049641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18522d0 is same with the state(6) to be set
[... the entry above repeated 5 more times (timestamps 14:45:59.049664 through 14:45:59.049699), interleaved with write-completion error entries; identical entries elided ...]
00:26:47.379 [2024-11-20 14:45:59.049855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:47.379 [2024-11-20 14:45:59.049953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851460 is same with the state(6) to be set
[... the entry above repeated 7 more times (timestamps 14:45:59.049973 through 14:45:59.050012); identical entries elided ...]
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.379 [2024-11-20 14:45:59.050799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.380 [2024-11-20 14:45:59.051796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.381 [2024-11-20 14:45:59.053320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.381 NVMe io qpair process completion error
[... write-completion error entries repeated; identical entries elided ...]
00:26:47.381
Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 [2024-11-20 14:45:59.054364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 
starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 [2024-11-20 
14:45:59.055207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.381 starting I/O failed: -6 00:26:47.381 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error 
(sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting 
I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 [2024-11-20 14:45:59.056237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 
00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, 
sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 [2024-11-20 14:45:59.058316] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.382 NVMe io qpair process completion error 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 starting I/O failed: -6 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.382 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 
Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 [2024-11-20 14:45:59.059374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with 
error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 
00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 [2024-11-20 14:45:59.060297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 
00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 
00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.383 starting I/O failed: -6 00:26:47.383 [2024-11-20 14:45:59.061326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.383 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 
Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 00:26:47.384 Write completed with error (sct=0, sc=8) 00:26:47.384 starting I/O failed: -6 
00:26:47.384 Write completed with error (sct=0, sc=8)
00:26:47.384 starting I/O failed: -6
[... repeated ...]
00:26:47.384 [2024-11-20 14:45:59.066666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:47.384 NVMe io qpair process completion error
00:26:47.384 Write completed with error (sct=0, sc=8)
00:26:47.384 starting I/O failed: -6
[... repeated ...]
00:26:47.384 [2024-11-20 14:45:59.068054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:47.384 starting I/O failed: -6
00:26:47.384 Write completed with error (sct=0, sc=8)
[... repeated ...]
00:26:47.385 [2024-11-20 14:45:59.068945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.385 Write completed with error (sct=0, sc=8)
00:26:47.385 starting I/O failed: -6
[... repeated ...]
00:26:47.385 [2024-11-20 14:45:59.070049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.385 Write completed with error (sct=0, sc=8)
00:26:47.385 starting I/O failed: -6
[... repeated ...]
00:26:47.386 [2024-11-20 14:45:59.074455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:47.386 NVMe io qpair process completion error
00:26:47.386 Write completed with error (sct=0, sc=8)
00:26:47.386 starting I/O failed: -6
[... repeated ...]
00:26:47.388 [2024-11-20 14:45:59.075444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.388 starting I/O failed: -6
00:26:47.388 Write completed with error (sct=0, sc=8)
[... repeated ...]
00:26:47.391 [2024-11-20 14:45:59.076319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:47.391 Write completed with error (sct=0, sc=8)
00:26:47.391 starting I/O failed: -6
[... repeated ...]
00:26:47.395 [2024-11-20 14:45:59.077424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.395 Write completed with error (sct=0, sc=8)
00:26:47.395 starting I/O failed: -6
[... repeated ...]
00:26:47.399 [2024-11-20 14:45:59.079047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:47.399 NVMe io qpair process completion error
00:26:47.399 Write completed with error (sct=0, sc=8)
00:26:47.399 starting I/O failed: -6
[... repeated ...]
00:26:47.401 [2024-11-20 14:45:59.080081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:47.401 Write completed with error (sct=0, sc=8)
00:26:47.401 starting I/O failed: -6
[... repeated ...]
00:26:47.404 [2024-11-20 14:45:59.080995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:47.404 Write completed with error (sct=0, sc=8)
00:26:47.404 starting I/O failed: -6
[... repeated ...]
00:26:47.405 Write completed with
error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 
starting I/O failed: -6 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 Write completed with error (sct=0, sc=8) 00:26:47.405 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 [2024-11-20 14:45:59.082049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, 
sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.406 starting I/O failed: -6 00:26:47.406 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error 
(sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.407 starting I/O failed: -6 00:26:47.407 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with 
error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 starting I/O failed: -6 00:26:47.408 [2024-11-20 14:45:59.087910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.408 NVMe io qpair process completion error 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.408 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write 
completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 [2024-11-20 14:45:59.088929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error 
(sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.409 starting I/O failed: -6 00:26:47.409 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 
00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 [2024-11-20 14:45:59.089863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.410 starting I/O failed: -6 00:26:47.410 starting I/O failed: -6 00:26:47.410 starting I/O failed: -6 00:26:47.410 starting I/O failed: -6 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 
starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.410 starting I/O failed: -6 00:26:47.410 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 
Write completed with error (sct=0, sc=8) 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.411 starting I/O failed: -6 00:26:47.411 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 [2024-11-20 14:45:59.091095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.412 Write completed with error (sct=0, sc=8) 00:26:47.412 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, 
sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error 
(sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.413 Write completed with error (sct=0, sc=8) 00:26:47.413 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with 
error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 [2024-11-20 14:45:59.096329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.414 NVMe io qpair process completion error 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 starting I/O failed: -6 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8) 00:26:47.414 Write completed with error 
(sct=0, sc=8) 00:26:47.414 Write completed with error (sct=0, sc=8)
00:26:47.414 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:26:47.414 [2024-11-20 14:45:59.097376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:26:47.415 [2024-11-20 14:45:59.098352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:26:47.416 [2024-11-20 14:45:59.099458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:26:47.417 [2024-11-20 14:45:59.101942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device
or address) on qpair id 4 00:26:47.417 NVMe io qpair process completion error 00:26:47.417 Initializing NVMe Controllers 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:26:47.417 Controller IO queue size 128, less than required. 
00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:26:47.417 Controller IO queue size 128, less than required. 00:26:47.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:26:47.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:26:47.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:26:47.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:26:47.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:26:47.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:26:47.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:26:47.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:47.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:26:47.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:26:47.418 Initialization complete. 
Launching workers.
00:26:47.418 ========================================================
00:26:47.418 Latency(us)
00:26:47.418 Device Information : IOPS MiB/s Average min max
00:26:47.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2131.31 91.58 60061.95 708.63 102493.79
00:26:47.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2118.47 91.03 60436.56 763.51 112791.89
00:26:47.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2165.82 93.06 59132.55 759.76 105141.29
00:26:47.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2161.82 92.89 59305.57 541.96 107910.59
00:26:47.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2144.14 92.13 59838.54 694.79 116532.74
00:26:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2141.83 92.03 59915.22 829.68 107677.65
00:26:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2151.30 92.44 59719.43 699.06 125020.67
00:26:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2155.30 92.61 58921.51 529.42 107123.76
00:26:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2168.97 93.20 59244.37 543.09 107000.74
00:26:47.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2172.97 93.37 58441.23 636.33 107552.78
00:26:47.419 ========================================================
00:26:47.419 Total : 21511.93 924.34 59497.72 529.42 125020.67
00:26:47.419
00:26:47.419 [2024-11-20 14:45:59.104963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222e900 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222c560 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222cbc0 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d410 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222da70 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222c890 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222cef0 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222e720 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d740 is same with the state(6) to be set
00:26:47.419 [2024-11-20 14:45:59.105249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eae0 is same with the state(6) to be set
00:26:47.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:47.685 14:45:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1657626
00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1657626
00:26:48.624 14:46:00
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1657626 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:48.624 14:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.624 rmmod nvme_tcp 00:26:48.624 rmmod nvme_fabrics 00:26:48.624 rmmod nvme_keyring 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1657358 ']' 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1657358 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1657358 ']' 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1657358 00:26:48.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1657358) - No such process 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1657358 is not 
found' 00:26:48.624 Process with pid 1657358 is not found 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.624 14:46:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.163 00:26:51.163 real 0m9.801s 00:26:51.163 user 0m24.865s 00:26:51.163 sys 0m5.260s 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:51.163 ************************************ 00:26:51.163 END TEST nvmf_shutdown_tc4 00:26:51.163 ************************************ 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:51.163 00:26:51.163 real 0m40.951s 00:26:51.163 user 1m41.261s 00:26:51.163 sys 0m14.047s 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:51.163 ************************************ 00:26:51.163 END TEST nvmf_shutdown 00:26:51.163 ************************************ 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:51.163 ************************************ 00:26:51.163 START TEST nvmf_nsid 00:26:51.163 ************************************ 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:51.163 * Looking for test storage... 
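As an aside, the Total row of the spdk_nvme_perf latency summary printed earlier in the shutdown_tc4 output can be cross-checked from the per-device rows: total IOPS and MiB/s are straight sums, and the overall average latency is consistent with an IOPS-weighted mean of the per-device averages. A minimal sketch, purely illustrative and not part of the test itself (figures copied from the log):

```python
# Cross-check of the spdk_nvme_perf latency summary from the shutdown_tc4 run.
# Rows: (IOPS, MiB/s, average latency in us) per TCP namespace, copied from the log.
rows = [
    (2131.31, 91.58, 60061.95),  # cnode10
    (2118.47, 91.03, 60436.56),  # cnode3
    (2165.82, 93.06, 59132.55),  # cnode5
    (2161.82, 92.89, 59305.57),  # cnode7
    (2144.14, 92.13, 59838.54),  # cnode9
    (2141.83, 92.03, 59915.22),  # cnode4
    (2151.30, 92.44, 59719.43),  # cnode6
    (2155.30, 92.61, 58921.51),  # cnode1
    (2168.97, 93.20, 59244.37),  # cnode8
    (2172.97, 93.37, 58441.23),  # cnode2
]

total_iops = sum(r[0] for r in rows)   # Total row reports 21511.93
total_mibs = sum(r[1] for r in rows)   # Total row reports 924.34
# The overall average latency matches an IOPS-weighted mean of the per-device averages
# (Total row reports 59497.72 us).
weighted_avg = sum(r[0] * r[2] for r in rows) / total_iops

print(round(total_iops, 2), round(total_mibs, 2), round(weighted_avg, 2))
```

The min (529.42) and max (125020.67) of the Total row are likewise just the extremes of the per-device min/max columns.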
00:26:51.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.163 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.164 
14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:51.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.164 --rc genhtml_branch_coverage=1 00:26:51.164 --rc genhtml_function_coverage=1 00:26:51.164 --rc genhtml_legend=1 00:26:51.164 --rc geninfo_all_blocks=1 00:26:51.164 --rc 
geninfo_unexecuted_blocks=1 00:26:51.164 00:26:51.164 ' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:51.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.164 --rc genhtml_branch_coverage=1 00:26:51.164 --rc genhtml_function_coverage=1 00:26:51.164 --rc genhtml_legend=1 00:26:51.164 --rc geninfo_all_blocks=1 00:26:51.164 --rc geninfo_unexecuted_blocks=1 00:26:51.164 00:26:51.164 ' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:51.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.164 --rc genhtml_branch_coverage=1 00:26:51.164 --rc genhtml_function_coverage=1 00:26:51.164 --rc genhtml_legend=1 00:26:51.164 --rc geninfo_all_blocks=1 00:26:51.164 --rc geninfo_unexecuted_blocks=1 00:26:51.164 00:26:51.164 ' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:51.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.164 --rc genhtml_branch_coverage=1 00:26:51.164 --rc genhtml_function_coverage=1 00:26:51.164 --rc genhtml_legend=1 00:26:51.164 --rc geninfo_all_blocks=1 00:26:51.164 --rc geninfo_unexecuted_blocks=1 00:26:51.164 00:26:51.164 ' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
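The `lt 1.15 2` trace above walks scripts/common.sh's `cmp_versions` helper: split each version on `.`, `-`, and `:`, then compare components numerically left to right, treating missing components in the shorter version as zero. A minimal Python sketch of the same idea (the function name is illustrative, not SPDK's API):

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Return True if version a < b, comparing numeric components
    split on '.', '-', and ':' (the cmp_versions approach)."""
    pa = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
    # Pad the shorter list with zeros so "1.15" vs "2" compares as [1, 15] vs [2, 0].
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    # Python compares lists lexicographically, element by element.
    return pa < pb
```

So `version_lt("1.15", "2")` holds because 1 < 2 in the first component, which is why the trace above takes the `lt 1.15 2` branch when checking the lcov version.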
00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.164 14:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:51.164 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.165 14:46:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:26:57.739 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:57.740 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:57.740 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:57.740 Found net devices under 0000:86:00.0: cvl_0_0 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:57.740 Found net devices under 0000:86:00.1: cvl_0_1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:57.740 14:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:57.740 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:26:57.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:26:57.740 00:26:57.740 --- 10.0.0.2 ping statistics --- 00:26:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.740 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:26:57.740 00:26:57.740 --- 10.0.0.1 ping statistics --- 00:26:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.740 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:57.740 14:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1662042 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1662042 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1662042 ']' 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.740 14:46:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:57.740 [2024-11-20 14:46:08.845376] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:26:57.740 [2024-11-20 14:46:08.845428] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.740 [2024-11-20 14:46:08.924579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.740 [2024-11-20 14:46:08.967412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.740 [2024-11-20 14:46:08.967450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.740 [2024-11-20 14:46:08.967458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.740 [2024-11-20 14:46:08.967465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.740 [2024-11-20 14:46:08.967470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:57.740 [2024-11-20 14:46:08.968049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1662115 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.741 
14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=08b29ace-5585-4b7d-8caa-07dd68825d78 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a253e7d9-045c-4598-992a-c85dc891c358 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7b265da3-7c4f-424e-89aa-7bd3da249018 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:57.741 null0 00:26:57.741 null1 00:26:57.741 [2024-11-20 14:46:09.162790] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:26:57.741 [2024-11-20 14:46:09.162835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662115 ] 00:26:57.741 null2 00:26:57.741 [2024-11-20 14:46:09.169961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.741 [2024-11-20 14:46:09.194166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.741 [2024-11-20 14:46:09.222416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1662115 /var/tmp/tgt2.sock 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1662115 ']' 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:57.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:57.741 [2024-11-20 14:46:09.266529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:57.741 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:58.000 [2024-11-20 14:46:09.797062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.000 [2024-11-20 14:46:09.813171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:58.000 nvme0n1 nvme0n2 00:26:58.000 nvme1n1 00:26:58.000 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:58.000 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:58.000 14:46:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:59.378 14:46:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 08b29ace-5585-4b7d-8caa-07dd68825d78 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:27:00.314 
14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:27:00.314 14:46:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=08b29ace55854b7d8caa07dd68825d78 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 08B29ACE55854B7D8CAA07DD68825D78 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 08B29ACE55854B7D8CAA07DD68825D78 == \0\8\B\2\9\A\C\E\5\5\8\5\4\B\7\D\8\C\A\A\0\7\D\D\6\8\8\2\5\D\7\8 ]] 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a253e7d9-045c-4598-992a-c85dc891c358 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a253e7d9045c4598992ac85dc891c358 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A253E7D9045C4598992AC85DC891C358 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A253E7D9045C4598992AC85DC891C358 == \A\2\5\3\E\7\D\9\0\4\5\C\4\5\9\8\9\9\2\A\C\8\5\D\C\8\9\1\C\3\5\8 ]] 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7b265da3-7c4f-424e-89aa-7bd3da249018 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7b265da37c4f424e89aa7bd3da249018 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7B265DA37C4F424E89AA7BD3DA249018 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7B265DA37C4F424E89AA7BD3DA249018 == \7\B\2\6\5\D\A\3\7\C\4\F\4\2\4\E\8\9\A\A\7\B\D\3\D\A\2\4\9\0\1\8 ]] 00:27:00.314 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1662115 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1662115 ']' 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1662115 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662115 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662115' 00:27:00.574 killing process with pid 1662115 00:27:00.574 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1662115 00:27:00.574 
14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1662115 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.833 rmmod nvme_tcp 00:27:00.833 rmmod nvme_fabrics 00:27:00.833 rmmod nvme_keyring 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1662042 ']' 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1662042 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1662042 ']' 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1662042 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:00.833 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1662042 00:27:01.092 
14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1662042' 00:27:01.092 killing process with pid 1662042 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1662042 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1662042 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:01.092 14:46:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:27:01.092 14:46:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.092 14:46:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.092 14:46:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.093 14:46:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.093 14:46:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.631 14:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.631 00:27:03.631 real 0m12.384s 00:27:03.631 user 0m9.717s 00:27:03.631 sys 0m5.486s 00:27:03.631 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.631 14:46:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:03.631 ************************************ 00:27:03.631 END TEST nvmf_nsid 00:27:03.631 ************************************ 00:27:03.631 14:46:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:03.631 00:27:03.631 real 12m2.118s 00:27:03.631 user 25m52.124s 00:27:03.631 sys 3m40.087s 00:27:03.631 14:46:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.631 14:46:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:03.631 ************************************ 00:27:03.631 END TEST nvmf_target_extra 00:27:03.631 ************************************ 00:27:03.631 14:46:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:03.631 14:46:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:03.631 14:46:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.631 14:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.631 ************************************ 00:27:03.631 START TEST nvmf_host 00:27:03.631 ************************************ 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:03.631 * Looking for test storage... 
00:27:03.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:03.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.631 --rc genhtml_branch_coverage=1 00:27:03.631 --rc genhtml_function_coverage=1 00:27:03.631 --rc genhtml_legend=1 00:27:03.631 --rc geninfo_all_blocks=1 00:27:03.631 --rc geninfo_unexecuted_blocks=1 00:27:03.631 00:27:03.631 ' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:03.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.631 --rc genhtml_branch_coverage=1 00:27:03.631 --rc genhtml_function_coverage=1 00:27:03.631 --rc genhtml_legend=1 00:27:03.631 --rc 
geninfo_all_blocks=1 00:27:03.631 --rc geninfo_unexecuted_blocks=1 00:27:03.631 00:27:03.631 ' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:03.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.631 --rc genhtml_branch_coverage=1 00:27:03.631 --rc genhtml_function_coverage=1 00:27:03.631 --rc genhtml_legend=1 00:27:03.631 --rc geninfo_all_blocks=1 00:27:03.631 --rc geninfo_unexecuted_blocks=1 00:27:03.631 00:27:03.631 ' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:03.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.631 --rc genhtml_branch_coverage=1 00:27:03.631 --rc genhtml_function_coverage=1 00:27:03.631 --rc genhtml_legend=1 00:27:03.631 --rc geninfo_all_blocks=1 00:27:03.631 --rc geninfo_unexecuted_blocks=1 00:27:03.631 00:27:03.631 ' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.631 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.632 ************************************ 00:27:03.632 START TEST nvmf_multicontroller 00:27:03.632 ************************************ 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:03.632 * Looking for test storage... 
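The `[: : integer expression expected` messages in this trace come from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: the `[` builtin rejects an empty string as an operand of the numeric `-eq` test and returns non-zero, which the script then treats as "condition false". A short sketch of the pitfall and the usual defaulting fix — the variable name here is illustrative, not taken from common.sh:

```shell
flag=""                            # empty, as in the trace
# [ "$flag" -eq 1 ] would emit "[: : integer expression expected"
# and fail; defaulting the expansion to 0 keeps the test numeric:
if [ "${flag:-0}" -eq 1 ]; then
    echo enabled
else
    echo disabled                  # prints disabled
fi
```

The trace shows the error is harmless here (the branch simply isn't taken), but the defaulted expansion avoids the stderr noise.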
00:27:03.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.632 --rc genhtml_branch_coverage=1 00:27:03.632 --rc genhtml_function_coverage=1 
00:27:03.632 --rc genhtml_legend=1 00:27:03.632 --rc geninfo_all_blocks=1 00:27:03.632 --rc geninfo_unexecuted_blocks=1 00:27:03.632 00:27:03.632 ' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.632 --rc genhtml_branch_coverage=1 00:27:03.632 --rc genhtml_function_coverage=1 00:27:03.632 --rc genhtml_legend=1 00:27:03.632 --rc geninfo_all_blocks=1 00:27:03.632 --rc geninfo_unexecuted_blocks=1 00:27:03.632 00:27:03.632 ' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.632 --rc genhtml_branch_coverage=1 00:27:03.632 --rc genhtml_function_coverage=1 00:27:03.632 --rc genhtml_legend=1 00:27:03.632 --rc geninfo_all_blocks=1 00:27:03.632 --rc geninfo_unexecuted_blocks=1 00:27:03.632 00:27:03.632 ' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.632 --rc genhtml_branch_coverage=1 00:27:03.632 --rc genhtml_function_coverage=1 00:27:03.632 --rc genhtml_legend=1 00:27:03.632 --rc geninfo_all_blocks=1 00:27:03.632 --rc geninfo_unexecuted_blocks=1 00:27:03.632 00:27:03.632 ' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.632 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.633 14:46:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.633 14:46:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.206 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:10.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:10.207 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.207 14:46:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:10.207 Found net devices under 0000:86:00.0: cvl_0_0 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:10.207 Found net devices under 0000:86:00.1: cvl_0_1 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:27:10.207 00:27:10.207 --- 10.0.0.2 ping statistics --- 00:27:10.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.207 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:27:10.207 00:27:10.207 --- 10.0.0.1 ping statistics --- 00:27:10.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.207 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.207 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1666336 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1666336 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1666336 ']' 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 [2024-11-20 14:46:21.527236] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:27:10.208 [2024-11-20 14:46:21.527283] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.208 [2024-11-20 14:46:21.604282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:10.208 [2024-11-20 14:46:21.646826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.208 [2024-11-20 14:46:21.646865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:10.208 [2024-11-20 14:46:21.646872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.208 [2024-11-20 14:46:21.646877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.208 [2024-11-20 14:46:21.646882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:10.208 [2024-11-20 14:46:21.648254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.208 [2024-11-20 14:46:21.648361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.208 [2024-11-20 14:46:21.648362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 [2024-11-20 14:46:21.786250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 Malloc0 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 [2024-11-20 
14:46:21.852504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 [2024-11-20 14:46:21.860418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 Malloc1 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1666357 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1666357 /var/tmp/bdevperf.sock 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1666357 ']' 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:10.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.208 14:46:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.469 NVMe0n1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.469 1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:10.469 14:46:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.469 request: 00:27:10.469 { 00:27:10.469 "name": "NVMe0", 00:27:10.469 "trtype": "tcp", 00:27:10.469 "traddr": "10.0.0.2", 00:27:10.469 "adrfam": "ipv4", 00:27:10.469 "trsvcid": "4420", 00:27:10.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.469 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:10.469 "hostaddr": "10.0.0.1", 00:27:10.469 "prchk_reftag": false, 00:27:10.469 "prchk_guard": false, 00:27:10.469 "hdgst": false, 00:27:10.469 "ddgst": false, 00:27:10.469 "allow_unrecognized_csi": false, 00:27:10.469 "method": "bdev_nvme_attach_controller", 00:27:10.469 "req_id": 1 00:27:10.469 } 00:27:10.469 Got JSON-RPC error response 00:27:10.469 response: 00:27:10.469 { 00:27:10.469 "code": -114, 00:27:10.469 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:10.469 } 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:10.469 14:46:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.469 request: 00:27:10.469 { 00:27:10.469 "name": "NVMe0", 00:27:10.469 "trtype": "tcp", 00:27:10.469 "traddr": "10.0.0.2", 00:27:10.469 "adrfam": "ipv4", 00:27:10.469 "trsvcid": "4420", 00:27:10.469 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:10.469 "hostaddr": "10.0.0.1", 00:27:10.469 "prchk_reftag": false, 00:27:10.469 "prchk_guard": false, 00:27:10.469 "hdgst": false, 00:27:10.469 "ddgst": false, 00:27:10.469 "allow_unrecognized_csi": false, 00:27:10.469 "method": "bdev_nvme_attach_controller", 00:27:10.469 "req_id": 1 00:27:10.469 } 00:27:10.469 Got JSON-RPC error response 00:27:10.469 response: 00:27:10.469 { 00:27:10.469 "code": -114, 00:27:10.469 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:10.469 } 00:27:10.469 14:46:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.469 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.469 request: 00:27:10.469 { 00:27:10.469 "name": "NVMe0", 00:27:10.469 "trtype": "tcp", 00:27:10.469 "traddr": "10.0.0.2", 00:27:10.469 "adrfam": "ipv4", 00:27:10.469 "trsvcid": "4420", 00:27:10.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.469 "hostaddr": "10.0.0.1", 00:27:10.469 "prchk_reftag": false, 00:27:10.469 "prchk_guard": false, 00:27:10.469 "hdgst": false, 00:27:10.469 "ddgst": false, 00:27:10.469 "multipath": "disable", 00:27:10.469 "allow_unrecognized_csi": false, 00:27:10.469 "method": "bdev_nvme_attach_controller", 00:27:10.469 "req_id": 1 00:27:10.470 } 00:27:10.470 Got JSON-RPC error response 00:27:10.470 response: 00:27:10.470 { 00:27:10.470 "code": -114, 00:27:10.470 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:10.470 } 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.470 request: 00:27:10.470 { 00:27:10.470 "name": "NVMe0", 00:27:10.470 "trtype": "tcp", 00:27:10.470 "traddr": "10.0.0.2", 00:27:10.470 "adrfam": "ipv4", 00:27:10.470 "trsvcid": "4420", 00:27:10.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:10.470 "hostaddr": "10.0.0.1", 00:27:10.470 "prchk_reftag": false, 00:27:10.470 "prchk_guard": false, 00:27:10.470 "hdgst": false, 00:27:10.470 "ddgst": false, 00:27:10.470 "multipath": "failover", 00:27:10.470 "allow_unrecognized_csi": false, 00:27:10.470 "method": "bdev_nvme_attach_controller", 00:27:10.470 "req_id": 1 00:27:10.470 } 00:27:10.470 Got JSON-RPC error response 00:27:10.470 response: 00:27:10.470 { 00:27:10.470 "code": -114, 00:27:10.470 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:10.470 } 00:27:10.470 14:46:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.470 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.729 NVMe0n1 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.729 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.987 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:10.987 14:46:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:11.923 { 00:27:11.923 "results": [ 00:27:11.923 { 00:27:11.923 "job": "NVMe0n1", 00:27:11.923 "core_mask": "0x1", 00:27:11.923 "workload": "write", 00:27:11.923 "status": "finished", 00:27:11.923 "queue_depth": 128, 00:27:11.923 "io_size": 4096, 00:27:11.923 "runtime": 1.005125, 00:27:11.923 "iops": 24081.581892799404, 00:27:11.923 "mibps": 94.06867926874767, 00:27:11.923 "io_failed": 0, 00:27:11.923 "io_timeout": 0, 00:27:11.923 "avg_latency_us": 5303.666622095686, 00:27:11.923 "min_latency_us": 3148.5773913043477, 00:27:11.923 "max_latency_us": 11340.577391304349 00:27:11.923 } 00:27:11.923 ], 00:27:11.923 "core_count": 1 00:27:11.923 } 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1666357 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1666357 ']' 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1666357 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.182 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666357 00:27:12.183 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:12.183 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:12.183 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666357' 00:27:12.183 killing process with pid 1666357 00:27:12.183 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1666357 00:27:12.183 14:46:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1666357 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:12.183 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:27:12.442 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:12.442 [2024-11-20 14:46:21.963250] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:27:12.442 [2024-11-20 14:46:21.963302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666357 ] 00:27:12.442 [2024-11-20 14:46:22.039379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.442 [2024-11-20 14:46:22.082241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.442 [2024-11-20 14:46:22.738062] bdev.c:4945:bdev_name_add: *ERROR*: Bdev name 7af3da4c-9262-4881-b333-5e24a85fe512 already exists 00:27:12.442 [2024-11-20 14:46:22.738090] bdev.c:8165:bdev_register: *ERROR*: Unable to add uuid:7af3da4c-9262-4881-b333-5e24a85fe512 alias for bdev NVMe1n1 00:27:12.442 [2024-11-20 14:46:22.738098] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:12.442 Running I/O for 1 seconds... 00:27:12.442 24013.00 IOPS, 93.80 MiB/s 00:27:12.442 Latency(us) 00:27:12.442 [2024-11-20T13:46:24.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.442 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:12.442 NVMe0n1 : 1.01 24081.58 94.07 0.00 0.00 5303.67 3148.58 11340.58 00:27:12.442 [2024-11-20T13:46:24.400Z] =================================================================================================================== 00:27:12.442 [2024-11-20T13:46:24.400Z] Total : 24081.58 94.07 0.00 0.00 5303.67 3148.58 11340.58 00:27:12.442 Received shutdown signal, test time was about 1.000000 seconds 00:27:12.442 00:27:12.442 Latency(us) 00:27:12.442 [2024-11-20T13:46:24.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.442 [2024-11-20T13:46:24.400Z] =================================================================================================================== 00:27:12.442 [2024-11-20T13:46:24.400Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:27:12.442 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:12.442 rmmod nvme_tcp 00:27:12.442 rmmod nvme_fabrics 00:27:12.442 rmmod nvme_keyring 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1666336 ']' 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1666336 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1666336 ']' 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1666336 
00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666336 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666336' 00:27:12.442 killing process with pid 1666336 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1666336 00:27:12.442 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1666336 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.702 14:46:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.607 14:46:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:14.607 00:27:14.607 real 0m11.193s 00:27:14.607 user 0m12.492s 00:27:14.607 sys 0m5.155s 00:27:14.607 14:46:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.607 14:46:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:14.607 ************************************ 00:27:14.607 END TEST nvmf_multicontroller 00:27:14.607 ************************************ 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.867 ************************************ 00:27:14.867 START TEST nvmf_aer 00:27:14.867 ************************************ 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:14.867 * Looking for test storage... 
00:27:14.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.867 --rc genhtml_branch_coverage=1 00:27:14.867 --rc genhtml_function_coverage=1 00:27:14.867 --rc genhtml_legend=1 00:27:14.867 --rc geninfo_all_blocks=1 00:27:14.867 --rc geninfo_unexecuted_blocks=1 00:27:14.867 00:27:14.867 ' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.867 --rc 
genhtml_branch_coverage=1 00:27:14.867 --rc genhtml_function_coverage=1 00:27:14.867 --rc genhtml_legend=1 00:27:14.867 --rc geninfo_all_blocks=1 00:27:14.867 --rc geninfo_unexecuted_blocks=1 00:27:14.867 00:27:14.867 ' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.867 --rc genhtml_branch_coverage=1 00:27:14.867 --rc genhtml_function_coverage=1 00:27:14.867 --rc genhtml_legend=1 00:27:14.867 --rc geninfo_all_blocks=1 00:27:14.867 --rc geninfo_unexecuted_blocks=1 00:27:14.867 00:27:14.867 ' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:14.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.867 --rc genhtml_branch_coverage=1 00:27:14.867 --rc genhtml_function_coverage=1 00:27:14.867 --rc genhtml_legend=1 00:27:14.867 --rc geninfo_all_blocks=1 00:27:14.867 --rc geninfo_unexecuted_blocks=1 00:27:14.867 00:27:14.867 ' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.867 14:46:26 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.867 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.868 14:46:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:21.451 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:21.451 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.451 14:46:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:21.451 Found net devices under 0000:86:00.0: cvl_0_0 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.451 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:21.451 Found net devices under 0000:86:00.1: cvl_0_1 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:21.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:27:21.452 00:27:21.452 --- 10.0.0.2 ping statistics --- 00:27:21.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.452 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:21.452 00:27:21.452 --- 10.0.0.1 ping statistics --- 00:27:21.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.452 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1670319 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1670319 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1670319 ']' 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.452 14:46:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.452 [2024-11-20 14:46:32.748466] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:27:21.452 [2024-11-20 14:46:32.748512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.452 [2024-11-20 14:46:32.828158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.452 [2024-11-20 14:46:32.871032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:21.452 [2024-11-20 14:46:32.871071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.452 [2024-11-20 14:46:32.871078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.452 [2024-11-20 14:46:32.871084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.452 [2024-11-20 14:46:32.871089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.452 [2024-11-20 14:46:32.872702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.452 [2024-11-20 14:46:32.872812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.452 [2024-11-20 14:46:32.872918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.452 [2024-11-20 14:46:32.872919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.711 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.711 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:27:21.711 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.711 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.711 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.712 [2024-11-20 14:46:33.640696] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.712 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.971 Malloc0 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.971 [2024-11-20 14:46:33.698776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.971 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:21.971 [ 00:27:21.971 { 00:27:21.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:21.971 "subtype": "Discovery", 00:27:21.971 "listen_addresses": [], 00:27:21.971 "allow_any_host": true, 00:27:21.971 "hosts": [] 00:27:21.971 }, 00:27:21.971 { 00:27:21.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.971 "subtype": "NVMe", 00:27:21.971 "listen_addresses": [ 00:27:21.971 { 00:27:21.971 "trtype": "TCP", 00:27:21.972 "adrfam": "IPv4", 00:27:21.972 "traddr": "10.0.0.2", 00:27:21.972 "trsvcid": "4420" 00:27:21.972 } 00:27:21.972 ], 00:27:21.972 "allow_any_host": true, 00:27:21.972 "hosts": [], 00:27:21.972 "serial_number": "SPDK00000000000001", 00:27:21.972 "model_number": "SPDK bdev Controller", 00:27:21.972 "max_namespaces": 2, 00:27:21.972 "min_cntlid": 1, 00:27:21.972 "max_cntlid": 65519, 00:27:21.972 "namespaces": [ 00:27:21.972 { 00:27:21.972 "nsid": 1, 00:27:21.972 "bdev_name": "Malloc0", 00:27:21.972 "name": "Malloc0", 00:27:21.972 "nguid": "CB464BA94CE04E3291BECAC60AEEC1E8", 00:27:21.972 "uuid": "cb464ba9-4ce0-4e32-91be-cac60aeec1e8" 00:27:21.972 } 00:27:21.972 ] 00:27:21.972 } 00:27:21.972 ] 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1670503 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:27:21.972 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:22.231 Malloc1 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.231 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:22.231 Asynchronous Event Request test 00:27:22.231 Attaching to 10.0.0.2 00:27:22.231 Attached to 10.0.0.2 00:27:22.231 Registering asynchronous event callbacks... 00:27:22.231 Starting namespace attribute notice tests for all controllers... 00:27:22.231 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:22.231 aer_cb - Changed Namespace 00:27:22.231 Cleaning up... 
00:27:22.231 [ 00:27:22.232 { 00:27:22.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:22.232 "subtype": "Discovery", 00:27:22.232 "listen_addresses": [], 00:27:22.232 "allow_any_host": true, 00:27:22.232 "hosts": [] 00:27:22.232 }, 00:27:22.232 { 00:27:22.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.232 "subtype": "NVMe", 00:27:22.232 "listen_addresses": [ 00:27:22.232 { 00:27:22.232 "trtype": "TCP", 00:27:22.232 "adrfam": "IPv4", 00:27:22.232 "traddr": "10.0.0.2", 00:27:22.232 "trsvcid": "4420" 00:27:22.232 } 00:27:22.232 ], 00:27:22.232 "allow_any_host": true, 00:27:22.232 "hosts": [], 00:27:22.232 "serial_number": "SPDK00000000000001", 00:27:22.232 "model_number": "SPDK bdev Controller", 00:27:22.232 "max_namespaces": 2, 00:27:22.232 "min_cntlid": 1, 00:27:22.232 "max_cntlid": 65519, 00:27:22.232 "namespaces": [ 00:27:22.232 { 00:27:22.232 "nsid": 1, 00:27:22.232 "bdev_name": "Malloc0", 00:27:22.232 "name": "Malloc0", 00:27:22.232 "nguid": "CB464BA94CE04E3291BECAC60AEEC1E8", 00:27:22.232 "uuid": "cb464ba9-4ce0-4e32-91be-cac60aeec1e8" 00:27:22.232 }, 00:27:22.232 { 00:27:22.232 "nsid": 2, 00:27:22.232 "bdev_name": "Malloc1", 00:27:22.232 "name": "Malloc1", 00:27:22.232 "nguid": "FA0CC87E92404C47BBAB9956B3AF34E4", 00:27:22.232 "uuid": "fa0cc87e-9240-4c47-bbab-9956b3af34e4" 00:27:22.232 } 00:27:22.232 ] 00:27:22.232 } 00:27:22.232 ] 00:27:22.232 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.232 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1670503 00:27:22.232 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:22.232 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.232 14:46:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.232 14:46:34 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.232 rmmod nvme_tcp 00:27:22.232 rmmod nvme_fabrics 00:27:22.232 rmmod nvme_keyring 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1670319 ']' 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1670319 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1670319 ']' 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1670319 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670319 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670319' 00:27:22.232 killing process with pid 1670319 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1670319 00:27:22.232 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1670319 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.491 14:46:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.031 00:27:25.031 real 0m9.821s 00:27:25.031 user 0m7.805s 00:27:25.031 sys 0m4.840s 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.031 ************************************ 00:27:25.031 END TEST nvmf_aer 00:27:25.031 ************************************ 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.031 ************************************ 00:27:25.031 START TEST nvmf_async_init 00:27:25.031 ************************************ 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:25.031 * Looking for test storage... 
00:27:25.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.031 14:46:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.031 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.032 --rc genhtml_branch_coverage=1 00:27:25.032 --rc genhtml_function_coverage=1 00:27:25.032 --rc genhtml_legend=1 00:27:25.032 --rc geninfo_all_blocks=1 00:27:25.032 --rc geninfo_unexecuted_blocks=1 00:27:25.032 
00:27:25.032 ' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.032 --rc genhtml_branch_coverage=1 00:27:25.032 --rc genhtml_function_coverage=1 00:27:25.032 --rc genhtml_legend=1 00:27:25.032 --rc geninfo_all_blocks=1 00:27:25.032 --rc geninfo_unexecuted_blocks=1 00:27:25.032 00:27:25.032 ' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.032 --rc genhtml_branch_coverage=1 00:27:25.032 --rc genhtml_function_coverage=1 00:27:25.032 --rc genhtml_legend=1 00:27:25.032 --rc geninfo_all_blocks=1 00:27:25.032 --rc geninfo_unexecuted_blocks=1 00:27:25.032 00:27:25.032 ' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.032 --rc genhtml_branch_coverage=1 00:27:25.032 --rc genhtml_function_coverage=1 00:27:25.032 --rc genhtml_legend=1 00:27:25.032 --rc geninfo_all_blocks=1 00:27:25.032 --rc geninfo_unexecuted_blocks=1 00:27:25.032 00:27:25.032 ' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3891a4985b034cb28283bd747ef4fca2 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.032 14:46:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.693 14:46:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:31.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:31.693 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.693 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:31.694 Found net devices under 0000:86:00.0: cvl_0_0 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:31.694 Found net devices under 0000:86:00.1: cvl_0_1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:31.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:27:31.694 00:27:31.694 --- 10.0.0.2 ping statistics --- 00:27:31.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.694 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:27:31.694 00:27:31.694 --- 10.0.0.1 ping statistics --- 00:27:31.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.694 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1674063 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1674063 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1674063 ']' 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.694 14:46:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 [2024-11-20 14:46:42.657602] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:27:31.694 [2024-11-20 14:46:42.657645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.694 [2024-11-20 14:46:42.736494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.694 [2024-11-20 14:46:42.778576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.694 [2024-11-20 14:46:42.778612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.694 [2024-11-20 14:46:42.778619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.694 [2024-11-20 14:46:42.778625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.694 [2024-11-20 14:46:42.778630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:31.694 [2024-11-20 14:46:42.779175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 [2024-11-20 14:46:43.534120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 null0 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3891a4985b034cb28283bd747ef4fca2 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.694 [2024-11-20 14:46:43.586391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.694 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.997 nvme0n1 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.997 [ 00:27:31.997 { 00:27:31.997 "name": "nvme0n1", 00:27:31.997 "aliases": [ 00:27:31.997 "3891a498-5b03-4cb2-8283-bd747ef4fca2" 00:27:31.997 ], 00:27:31.997 "product_name": "NVMe disk", 00:27:31.997 "block_size": 512, 00:27:31.997 "num_blocks": 2097152, 00:27:31.997 "uuid": "3891a498-5b03-4cb2-8283-bd747ef4fca2", 00:27:31.997 "numa_id": 1, 00:27:31.997 "assigned_rate_limits": { 00:27:31.997 "rw_ios_per_sec": 0, 00:27:31.997 "rw_mbytes_per_sec": 0, 00:27:31.997 "r_mbytes_per_sec": 0, 00:27:31.997 "w_mbytes_per_sec": 0 00:27:31.997 }, 00:27:31.997 "claimed": false, 00:27:31.997 "zoned": false, 00:27:31.997 "supported_io_types": { 00:27:31.997 "read": true, 00:27:31.997 "write": true, 00:27:31.997 "unmap": false, 00:27:31.997 "flush": true, 00:27:31.997 "reset": true, 00:27:31.997 "nvme_admin": true, 00:27:31.997 "nvme_io": true, 00:27:31.997 "nvme_io_md": false, 00:27:31.997 "write_zeroes": true, 00:27:31.997 "zcopy": false, 00:27:31.997 "get_zone_info": false, 00:27:31.997 "zone_management": false, 00:27:31.997 "zone_append": false, 00:27:31.997 "compare": true, 00:27:31.997 "compare_and_write": true, 00:27:31.997 "abort": true, 00:27:31.997 "seek_hole": false, 00:27:31.997 "seek_data": false, 00:27:31.997 "copy": true, 00:27:31.997 
"nvme_iov_md": false 00:27:31.997 }, 00:27:31.997 "memory_domains": [ 00:27:31.997 { 00:27:31.997 "dma_device_id": "system", 00:27:31.997 "dma_device_type": 1 00:27:31.997 } 00:27:31.997 ], 00:27:31.997 "driver_specific": { 00:27:31.997 "nvme": [ 00:27:31.997 { 00:27:31.997 "trid": { 00:27:31.997 "trtype": "TCP", 00:27:31.997 "adrfam": "IPv4", 00:27:31.997 "traddr": "10.0.0.2", 00:27:31.997 "trsvcid": "4420", 00:27:31.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:31.997 }, 00:27:31.997 "ctrlr_data": { 00:27:31.997 "cntlid": 1, 00:27:31.997 "vendor_id": "0x8086", 00:27:31.997 "model_number": "SPDK bdev Controller", 00:27:31.997 "serial_number": "00000000000000000000", 00:27:31.997 "firmware_revision": "25.01", 00:27:31.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.997 "oacs": { 00:27:31.997 "security": 0, 00:27:31.997 "format": 0, 00:27:31.997 "firmware": 0, 00:27:31.997 "ns_manage": 0 00:27:31.997 }, 00:27:31.997 "multi_ctrlr": true, 00:27:31.997 "ana_reporting": false 00:27:31.997 }, 00:27:31.997 "vs": { 00:27:31.997 "nvme_version": "1.3" 00:27:31.997 }, 00:27:31.997 "ns_data": { 00:27:31.997 "id": 1, 00:27:31.997 "can_share": true 00:27:31.997 } 00:27:31.997 } 00:27:31.997 ], 00:27:31.997 "mp_policy": "active_passive" 00:27:31.997 } 00:27:31.997 } 00:27:31.997 ] 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.997 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:31.997 [2024-11-20 14:46:43.850913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:31.997 [2024-11-20 14:46:43.850980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x81be10 (9): Bad file descriptor 00:27:32.257 [2024-11-20 14:46:43.983038] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:27:32.257 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.257 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:32.257 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.257 14:46:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.257 [ 00:27:32.257 { 00:27:32.257 "name": "nvme0n1", 00:27:32.257 "aliases": [ 00:27:32.257 "3891a498-5b03-4cb2-8283-bd747ef4fca2" 00:27:32.257 ], 00:27:32.257 "product_name": "NVMe disk", 00:27:32.257 "block_size": 512, 00:27:32.257 "num_blocks": 2097152, 00:27:32.257 "uuid": "3891a498-5b03-4cb2-8283-bd747ef4fca2", 00:27:32.257 "numa_id": 1, 00:27:32.257 "assigned_rate_limits": { 00:27:32.257 "rw_ios_per_sec": 0, 00:27:32.257 "rw_mbytes_per_sec": 0, 00:27:32.257 "r_mbytes_per_sec": 0, 00:27:32.257 "w_mbytes_per_sec": 0 00:27:32.257 }, 00:27:32.257 "claimed": false, 00:27:32.257 "zoned": false, 00:27:32.257 "supported_io_types": { 00:27:32.257 "read": true, 00:27:32.257 "write": true, 00:27:32.257 "unmap": false, 00:27:32.257 "flush": true, 00:27:32.257 "reset": true, 00:27:32.257 "nvme_admin": true, 00:27:32.257 "nvme_io": true, 00:27:32.257 "nvme_io_md": false, 00:27:32.257 "write_zeroes": true, 00:27:32.257 "zcopy": false, 00:27:32.257 "get_zone_info": false, 00:27:32.257 "zone_management": false, 00:27:32.257 "zone_append": false, 00:27:32.257 "compare": true, 00:27:32.257 "compare_and_write": true, 00:27:32.257 "abort": true, 00:27:32.257 "seek_hole": false, 00:27:32.257 "seek_data": false, 00:27:32.257 "copy": true, 00:27:32.257 "nvme_iov_md": false 00:27:32.257 }, 00:27:32.257 "memory_domains": [ 
00:27:32.257 { 00:27:32.257 "dma_device_id": "system", 00:27:32.257 "dma_device_type": 1 00:27:32.257 } 00:27:32.257 ], 00:27:32.257 "driver_specific": { 00:27:32.257 "nvme": [ 00:27:32.257 { 00:27:32.257 "trid": { 00:27:32.257 "trtype": "TCP", 00:27:32.257 "adrfam": "IPv4", 00:27:32.257 "traddr": "10.0.0.2", 00:27:32.257 "trsvcid": "4420", 00:27:32.257 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:32.257 }, 00:27:32.257 "ctrlr_data": { 00:27:32.257 "cntlid": 2, 00:27:32.257 "vendor_id": "0x8086", 00:27:32.257 "model_number": "SPDK bdev Controller", 00:27:32.257 "serial_number": "00000000000000000000", 00:27:32.257 "firmware_revision": "25.01", 00:27:32.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.257 "oacs": { 00:27:32.257 "security": 0, 00:27:32.257 "format": 0, 00:27:32.257 "firmware": 0, 00:27:32.257 "ns_manage": 0 00:27:32.257 }, 00:27:32.257 "multi_ctrlr": true, 00:27:32.257 "ana_reporting": false 00:27:32.257 }, 00:27:32.257 "vs": { 00:27:32.257 "nvme_version": "1.3" 00:27:32.257 }, 00:27:32.257 "ns_data": { 00:27:32.257 "id": 1, 00:27:32.257 "can_share": true 00:27:32.257 } 00:27:32.257 } 00:27:32.257 ], 00:27:32.257 "mp_policy": "active_passive" 00:27:32.257 } 00:27:32.257 } 00:27:32.257 ] 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.z6UENwg37M 
00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.z6UENwg37M 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.z6UENwg37M 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.257 [2024-11-20 14:46:44.059547] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:32.257 [2024-11-20 14:46:44.059691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.257 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.258 [2024-11-20 14:46:44.079613] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:32.258 nvme0n1 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.258 [ 00:27:32.258 { 00:27:32.258 "name": "nvme0n1", 00:27:32.258 "aliases": [ 00:27:32.258 "3891a498-5b03-4cb2-8283-bd747ef4fca2" 00:27:32.258 ], 00:27:32.258 "product_name": "NVMe disk", 00:27:32.258 "block_size": 512, 00:27:32.258 "num_blocks": 2097152, 00:27:32.258 "uuid": "3891a498-5b03-4cb2-8283-bd747ef4fca2", 00:27:32.258 "numa_id": 1, 00:27:32.258 "assigned_rate_limits": { 00:27:32.258 "rw_ios_per_sec": 0, 00:27:32.258 
"rw_mbytes_per_sec": 0, 00:27:32.258 "r_mbytes_per_sec": 0, 00:27:32.258 "w_mbytes_per_sec": 0 00:27:32.258 }, 00:27:32.258 "claimed": false, 00:27:32.258 "zoned": false, 00:27:32.258 "supported_io_types": { 00:27:32.258 "read": true, 00:27:32.258 "write": true, 00:27:32.258 "unmap": false, 00:27:32.258 "flush": true, 00:27:32.258 "reset": true, 00:27:32.258 "nvme_admin": true, 00:27:32.258 "nvme_io": true, 00:27:32.258 "nvme_io_md": false, 00:27:32.258 "write_zeroes": true, 00:27:32.258 "zcopy": false, 00:27:32.258 "get_zone_info": false, 00:27:32.258 "zone_management": false, 00:27:32.258 "zone_append": false, 00:27:32.258 "compare": true, 00:27:32.258 "compare_and_write": true, 00:27:32.258 "abort": true, 00:27:32.258 "seek_hole": false, 00:27:32.258 "seek_data": false, 00:27:32.258 "copy": true, 00:27:32.258 "nvme_iov_md": false 00:27:32.258 }, 00:27:32.258 "memory_domains": [ 00:27:32.258 { 00:27:32.258 "dma_device_id": "system", 00:27:32.258 "dma_device_type": 1 00:27:32.258 } 00:27:32.258 ], 00:27:32.258 "driver_specific": { 00:27:32.258 "nvme": [ 00:27:32.258 { 00:27:32.258 "trid": { 00:27:32.258 "trtype": "TCP", 00:27:32.258 "adrfam": "IPv4", 00:27:32.258 "traddr": "10.0.0.2", 00:27:32.258 "trsvcid": "4421", 00:27:32.258 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:32.258 }, 00:27:32.258 "ctrlr_data": { 00:27:32.258 "cntlid": 3, 00:27:32.258 "vendor_id": "0x8086", 00:27:32.258 "model_number": "SPDK bdev Controller", 00:27:32.258 "serial_number": "00000000000000000000", 00:27:32.258 "firmware_revision": "25.01", 00:27:32.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.258 "oacs": { 00:27:32.258 "security": 0, 00:27:32.258 "format": 0, 00:27:32.258 "firmware": 0, 00:27:32.258 "ns_manage": 0 00:27:32.258 }, 00:27:32.258 "multi_ctrlr": true, 00:27:32.258 "ana_reporting": false 00:27:32.258 }, 00:27:32.258 "vs": { 00:27:32.258 "nvme_version": "1.3" 00:27:32.258 }, 00:27:32.258 "ns_data": { 00:27:32.258 "id": 1, 00:27:32.258 "can_share": true 00:27:32.258 } 
00:27:32.258 } 00:27:32.258 ], 00:27:32.258 "mp_policy": "active_passive" 00:27:32.258 } 00:27:32.258 } 00:27:32.258 ] 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.z6UENwg37M 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.258 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.258 rmmod nvme_tcp 00:27:32.517 rmmod nvme_fabrics 00:27:32.517 rmmod nvme_keyring 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:27:32.517 14:46:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1674063 ']' 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1674063 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1674063 ']' 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1674063 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1674063 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1674063' 00:27:32.517 killing process with pid 1674063 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1674063 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1674063 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:32.517 
14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.517 14:46:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.057 00:27:35.057 real 0m10.081s 00:27:35.057 user 0m3.872s 00:27:35.057 sys 0m4.840s 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.057 ************************************ 00:27:35.057 END TEST nvmf_async_init 00:27:35.057 ************************************ 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.057 ************************************ 00:27:35.057 START TEST dma 00:27:35.057 ************************************ 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:27:35.057 * Looking for test storage... 00:27:35.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:27:35.057 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.058 --rc genhtml_branch_coverage=1 00:27:35.058 --rc genhtml_function_coverage=1 00:27:35.058 --rc genhtml_legend=1 00:27:35.058 --rc geninfo_all_blocks=1 00:27:35.058 --rc geninfo_unexecuted_blocks=1 00:27:35.058 00:27:35.058 ' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.058 --rc genhtml_branch_coverage=1 00:27:35.058 --rc genhtml_function_coverage=1 
00:27:35.058 --rc genhtml_legend=1 00:27:35.058 --rc geninfo_all_blocks=1 00:27:35.058 --rc geninfo_unexecuted_blocks=1 00:27:35.058 00:27:35.058 ' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.058 --rc genhtml_branch_coverage=1 00:27:35.058 --rc genhtml_function_coverage=1 00:27:35.058 --rc genhtml_legend=1 00:27:35.058 --rc geninfo_all_blocks=1 00:27:35.058 --rc geninfo_unexecuted_blocks=1 00:27:35.058 00:27:35.058 ' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.058 --rc genhtml_branch_coverage=1 00:27:35.058 --rc genhtml_function_coverage=1 00:27:35.058 --rc genhtml_legend=1 00:27:35.058 --rc geninfo_all_blocks=1 00:27:35.058 --rc geninfo_unexecuted_blocks=1 00:27:35.058 00:27:35.058 ' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:35.058 
14:46:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:35.058 00:27:35.058 real 0m0.207s 00:27:35.058 user 0m0.126s 00:27:35.058 sys 0m0.094s 00:27:35.058 14:46:46 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:35.058 ************************************ 00:27:35.058 END TEST dma 00:27:35.058 ************************************ 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.058 ************************************ 00:27:35.058 START TEST nvmf_identify 00:27:35.058 ************************************ 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:35.058 * Looking for test storage... 
00:27:35.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.058 --rc genhtml_branch_coverage=1 00:27:35.058 --rc genhtml_function_coverage=1 00:27:35.058 --rc genhtml_legend=1 00:27:35.058 --rc geninfo_all_blocks=1 00:27:35.058 --rc geninfo_unexecuted_blocks=1 00:27:35.058 00:27:35.058 ' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.058 --rc genhtml_branch_coverage=1 00:27:35.058 --rc genhtml_function_coverage=1 00:27:35.058 --rc genhtml_legend=1 00:27:35.058 --rc geninfo_all_blocks=1 00:27:35.058 --rc geninfo_unexecuted_blocks=1 00:27:35.058 00:27:35.058 ' 00:27:35.058 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.059 --rc genhtml_branch_coverage=1 00:27:35.059 --rc genhtml_function_coverage=1 00:27:35.059 --rc genhtml_legend=1 00:27:35.059 --rc geninfo_all_blocks=1 00:27:35.059 --rc geninfo_unexecuted_blocks=1 00:27:35.059 00:27:35.059 ' 00:27:35.059 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.059 --rc genhtml_branch_coverage=1 00:27:35.059 --rc genhtml_function_coverage=1 00:27:35.059 --rc genhtml_legend=1 00:27:35.059 --rc geninfo_all_blocks=1 00:27:35.059 --rc geninfo_unexecuted_blocks=1 00:27:35.059 00:27:35.059 ' 00:27:35.059 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.059 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:35.059 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.059 14:46:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.059 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.318 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.318 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.319 14:46:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.891 14:46:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.891 
14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.891 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.891 14:46:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.891 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.891 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:27:41.892 00:27:41.892 --- 10.0.0.2 ping statistics --- 00:27:41.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.892 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:27:41.892 00:27:41.892 --- 10.0.0.1 ping statistics --- 00:27:41.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.892 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1677872 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1677872 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1677872 ']' 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.892 14:46:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.892 [2024-11-20 14:46:53.016516] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:27:41.892 [2024-11-20 14:46:53.016564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.892 [2024-11-20 14:46:53.096415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:41.892 [2024-11-20 14:46:53.138093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.892 [2024-11-20 14:46:53.138132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.892 [2024-11-20 14:46:53.138140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.892 [2024-11-20 14:46:53.138146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.892 [2024-11-20 14:46:53.138150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.892 [2024-11-20 14:46:53.139761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.892 [2024-11-20 14:46:53.139875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.892 [2024-11-20 14:46:53.139999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.892 [2024-11-20 14:46:53.140000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 [2024-11-20 14:46:53.864578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 Malloc0 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 [2024-11-20 14:46:53.967214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 14:46:53 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 [ 00:27:42.152 { 00:27:42.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:42.152 "subtype": "Discovery", 00:27:42.152 "listen_addresses": [ 00:27:42.152 { 00:27:42.152 "trtype": "TCP", 00:27:42.152 "adrfam": "IPv4", 00:27:42.152 "traddr": "10.0.0.2", 00:27:42.152 "trsvcid": "4420" 00:27:42.152 } 00:27:42.152 ], 00:27:42.152 "allow_any_host": true, 00:27:42.152 "hosts": [] 00:27:42.152 }, 00:27:42.152 { 00:27:42.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.152 "subtype": "NVMe", 00:27:42.152 "listen_addresses": [ 00:27:42.152 { 00:27:42.152 "trtype": "TCP", 00:27:42.152 "adrfam": "IPv4", 00:27:42.152 "traddr": "10.0.0.2", 00:27:42.152 "trsvcid": "4420" 00:27:42.152 } 00:27:42.152 ], 00:27:42.152 "allow_any_host": true, 00:27:42.152 "hosts": [], 00:27:42.152 "serial_number": "SPDK00000000000001", 00:27:42.152 "model_number": "SPDK bdev Controller", 00:27:42.152 "max_namespaces": 32, 00:27:42.152 "min_cntlid": 1, 00:27:42.152 "max_cntlid": 65519, 00:27:42.152 "namespaces": [ 00:27:42.152 { 00:27:42.152 "nsid": 1, 00:27:42.152 "bdev_name": "Malloc0", 00:27:42.152 "name": "Malloc0", 00:27:42.152 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:42.152 "eui64": "ABCDEF0123456789", 00:27:42.152 "uuid": "f2a4d079-0590-4523-80ac-cbc3506fb565" 00:27:42.152 } 00:27:42.152 ] 00:27:42.152 } 00:27:42.152 ] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.152 14:46:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:42.152 [2024-11-20 14:46:54.018932] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:27:42.152 [2024-11-20 14:46:54.018974] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678116 ] 00:27:42.152 [2024-11-20 14:46:54.058894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:42.152 [2024-11-20 14:46:54.058938] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:42.152 [2024-11-20 14:46:54.058944] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:42.152 [2024-11-20 14:46:54.062961] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:42.152 [2024-11-20 14:46:54.062973] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:42.152 [2024-11-20 14:46:54.063604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:42.152 [2024-11-20 14:46:54.063637] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12a7690 0 00:27:42.152 [2024-11-20 14:46:54.069964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:42.152 [2024-11-20 14:46:54.069977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:42.152 [2024-11-20 14:46:54.069981] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:42.152 [2024-11-20 14:46:54.069984] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:42.153 [2024-11-20 14:46:54.070017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.070023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.070026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.070038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:42.153 [2024-11-20 14:46:54.070055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.076957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.076965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.076968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.076972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.076984] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:42.153 [2024-11-20 14:46:54.076990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:42.153 [2024-11-20 14:46:54.076994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:42.153 [2024-11-20 14:46:54.077007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 
00:27:42.153 [2024-11-20 14:46:54.077021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.077034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.077245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.077251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.077254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.077265] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:42.153 [2024-11-20 14:46:54.077271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:42.153 [2024-11-20 14:46:54.077278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.077291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.077301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.077390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.077396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:27:42.153 [2024-11-20 14:46:54.077399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.077407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:42.153 [2024-11-20 14:46:54.077414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:42.153 [2024-11-20 14:46:54.077420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.077432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.077442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.077506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.077512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.077515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.077523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:42.153 [2024-11-20 14:46:54.077531] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.077544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.077553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.077643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.077648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.077651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.077658] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:42.153 [2024-11-20 14:46:54.077665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:42.153 [2024-11-20 14:46:54.077671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:42.153 [2024-11-20 14:46:54.077779] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:42.153 [2024-11-20 14:46:54.077783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:27:42.153 [2024-11-20 14:46:54.077791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.077803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.077813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.077879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.077885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.077888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.077895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:42.153 [2024-11-20 14:46:54.077903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.077910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.077916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.077925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 
14:46:54.078029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.078036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.078038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.153 [2024-11-20 14:46:54.078046] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:42.153 [2024-11-20 14:46:54.078050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:42.153 [2024-11-20 14:46:54.078057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:42.153 [2024-11-20 14:46:54.078064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:42.153 [2024-11-20 14:46:54.078072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.153 [2024-11-20 14:46:54.078082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-11-20 14:46:54.078093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.153 [2024-11-20 14:46:54.078186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.153 [2024-11-20 14:46:54.078191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:27:42.153 [2024-11-20 14:46:54.078194] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078198] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7690): datao=0, datal=4096, cccid=0 00:27:42.153 [2024-11-20 14:46:54.078202] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1309100) on tqpair(0x12a7690): expected_datao=0, payload_size=4096 00:27:42.153 [2024-11-20 14:46:54.078206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078238] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.153 [2024-11-20 14:46:54.078286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.153 [2024-11-20 14:46:54.078289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.153 [2024-11-20 14:46:54.078293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.154 [2024-11-20 14:46:54.078299] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:42.154 [2024-11-20 14:46:54.078304] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:42.154 [2024-11-20 14:46:54.078308] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:42.154 [2024-11-20 14:46:54.078317] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:42.154 [2024-11-20 14:46:54.078321] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:42.154 [2024-11-20 14:46:54.078326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:42.154 [2024-11-20 14:46:54.078336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:42.154 [2024-11-20 14:46:54.078342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:42.154 [2024-11-20 14:46:54.078365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.154 [2024-11-20 14:46:54.078429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.154 [2024-11-20 14:46:54.078435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.154 [2024-11-20 14:46:54.078438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.154 [2024-11-20 14:46:54.078447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078459] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.154 [2024-11-20 14:46:54.078464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.154 [2024-11-20 14:46:54.078483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.154 [2024-11-20 14:46:54.078499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.154 [2024-11-20 14:46:54.078514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:42.154 [2024-11-20 14:46:54.078523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:42.154 [2024-11-20 14:46:54.078528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.154 [2024-11-20 14:46:54.078548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309100, cid 0, qid 0 00:27:42.154 [2024-11-20 14:46:54.078553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309280, cid 1, qid 0 00:27:42.154 [2024-11-20 14:46:54.078557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309400, cid 2, qid 0 00:27:42.154 [2024-11-20 14:46:54.078561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.154 [2024-11-20 14:46:54.078565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309700, cid 4, qid 0 00:27:42.154 [2024-11-20 14:46:54.078687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.154 [2024-11-20 14:46:54.078692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.154 [2024-11-20 14:46:54.078695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309700) on tqpair=0x12a7690 00:27:42.154 [2024-11-20 14:46:54.078706] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:42.154 [2024-11-20 14:46:54.078710] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:27:42.154 [2024-11-20 14:46:54.078719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.154 [2024-11-20 14:46:54.078738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309700, cid 4, qid 0 00:27:42.154 [2024-11-20 14:46:54.078811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.154 [2024-11-20 14:46:54.078817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.154 [2024-11-20 14:46:54.078821] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7690): datao=0, datal=4096, cccid=4 00:27:42.154 [2024-11-20 14:46:54.078829] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1309700) on tqpair(0x12a7690): expected_datao=0, payload_size=4096 00:27:42.154 [2024-11-20 14:46:54.078833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078838] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078842] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.154 [2024-11-20 14:46:54.078891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.154 [2024-11-20 14:46:54.078894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1309700) on tqpair=0x12a7690 00:27:42.154 [2024-11-20 14:46:54.078908] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:42.154 [2024-11-20 14:46:54.078927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.154 [2024-11-20 14:46:54.078942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.078957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a7690) 00:27:42.154 [2024-11-20 14:46:54.078963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.154 [2024-11-20 14:46:54.078977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309700, cid 4, qid 0 00:27:42.154 [2024-11-20 14:46:54.078982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309880, cid 5, qid 0 00:27:42.154 [2024-11-20 14:46:54.079098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.154 [2024-11-20 14:46:54.079104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.154 [2024-11-20 14:46:54.079107] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.079110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7690): datao=0, datal=1024, cccid=4 00:27:42.154 [2024-11-20 14:46:54.079115] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1309700) on tqpair(0x12a7690): expected_datao=0, payload_size=1024 00:27:42.154 [2024-11-20 14:46:54.079118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.079124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.079127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.079132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.154 [2024-11-20 14:46:54.079137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.154 [2024-11-20 14:46:54.079140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.154 [2024-11-20 14:46:54.079143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309880) on tqpair=0x12a7690 00:27:42.416 [2024-11-20 14:46:54.120122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.416 [2024-11-20 14:46:54.120135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.416 [2024-11-20 14:46:54.120139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.416 [2024-11-20 14:46:54.120143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309700) on tqpair=0x12a7690 00:27:42.416 [2024-11-20 14:46:54.120157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.416 [2024-11-20 14:46:54.120161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7690) 00:27:42.416 [2024-11-20 14:46:54.120168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.416 [2024-11-20 14:46:54.120184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309700, cid 4, qid 0 00:27:42.416 [2024-11-20 14:46:54.120259] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.416 [2024-11-20 14:46:54.120265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.416 [2024-11-20 14:46:54.120268] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.416 [2024-11-20 14:46:54.120271] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7690): datao=0, datal=3072, cccid=4 00:27:42.416 [2024-11-20 14:46:54.120275] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1309700) on tqpair(0x12a7690): expected_datao=0, payload_size=3072 00:27:42.416 [2024-11-20 14:46:54.120279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.416 [2024-11-20 14:46:54.120295] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.416 [2024-11-20 14:46:54.120299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.416 [2024-11-20 14:46:54.120371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.416 [2024-11-20 14:46:54.120377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.416 [2024-11-20 14:46:54.120380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.120383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309700) on tqpair=0x12a7690 00:27:42.417 [2024-11-20 14:46:54.120390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.120394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7690) 00:27:42.417 [2024-11-20 14:46:54.120400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.417 [2024-11-20 14:46:54.120413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309700, cid 4, qid 0 00:27:42.417 [2024-11-20 
14:46:54.120485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.417 [2024-11-20 14:46:54.120491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.417 [2024-11-20 14:46:54.120494] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.120497] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7690): datao=0, datal=8, cccid=4 00:27:42.417 [2024-11-20 14:46:54.120501] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1309700) on tqpair(0x12a7690): expected_datao=0, payload_size=8 00:27:42.417 [2024-11-20 14:46:54.120505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.120510] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.120513] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.161121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.417 [2024-11-20 14:46:54.161132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.417 [2024-11-20 14:46:54.161136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.417 [2024-11-20 14:46:54.161139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309700) on tqpair=0x12a7690 00:27:42.417 ===================================================== 00:27:42.417 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:42.417 ===================================================== 00:27:42.417 Controller Capabilities/Features 00:27:42.417 ================================ 00:27:42.417 Vendor ID: 0000 00:27:42.417 Subsystem Vendor ID: 0000 00:27:42.417 Serial Number: .................... 00:27:42.417 Model Number: ........................................ 
00:27:42.417 Firmware Version: 25.01 00:27:42.417 Recommended Arb Burst: 0 00:27:42.417 IEEE OUI Identifier: 00 00 00 00:27:42.417 Multi-path I/O 00:27:42.417 May have multiple subsystem ports: No 00:27:42.417 May have multiple controllers: No 00:27:42.417 Associated with SR-IOV VF: No 00:27:42.417 Max Data Transfer Size: 131072 00:27:42.417 Max Number of Namespaces: 0 00:27:42.417 Max Number of I/O Queues: 1024 00:27:42.417 NVMe Specification Version (VS): 1.3 00:27:42.417 NVMe Specification Version (Identify): 1.3 00:27:42.417 Maximum Queue Entries: 128 00:27:42.417 Contiguous Queues Required: Yes 00:27:42.417 Arbitration Mechanisms Supported 00:27:42.417 Weighted Round Robin: Not Supported 00:27:42.417 Vendor Specific: Not Supported 00:27:42.417 Reset Timeout: 15000 ms 00:27:42.417 Doorbell Stride: 4 bytes 00:27:42.417 NVM Subsystem Reset: Not Supported 00:27:42.417 Command Sets Supported 00:27:42.417 NVM Command Set: Supported 00:27:42.417 Boot Partition: Not Supported 00:27:42.417 Memory Page Size Minimum: 4096 bytes 00:27:42.417 Memory Page Size Maximum: 4096 bytes 00:27:42.417 Persistent Memory Region: Not Supported 00:27:42.417 Optional Asynchronous Events Supported 00:27:42.417 Namespace Attribute Notices: Not Supported 00:27:42.417 Firmware Activation Notices: Not Supported 00:27:42.417 ANA Change Notices: Not Supported 00:27:42.417 PLE Aggregate Log Change Notices: Not Supported 00:27:42.417 LBA Status Info Alert Notices: Not Supported 00:27:42.417 EGE Aggregate Log Change Notices: Not Supported 00:27:42.417 Normal NVM Subsystem Shutdown event: Not Supported 00:27:42.417 Zone Descriptor Change Notices: Not Supported 00:27:42.417 Discovery Log Change Notices: Supported 00:27:42.417 Controller Attributes 00:27:42.417 128-bit Host Identifier: Not Supported 00:27:42.417 Non-Operational Permissive Mode: Not Supported 00:27:42.417 NVM Sets: Not Supported 00:27:42.417 Read Recovery Levels: Not Supported 00:27:42.417 Endurance Groups: Not Supported 00:27:42.417 
Predictable Latency Mode: Not Supported 00:27:42.417 Traffic Based Keep ALive: Not Supported 00:27:42.417 Namespace Granularity: Not Supported 00:27:42.417 SQ Associations: Not Supported 00:27:42.417 UUID List: Not Supported 00:27:42.417 Multi-Domain Subsystem: Not Supported 00:27:42.417 Fixed Capacity Management: Not Supported 00:27:42.417 Variable Capacity Management: Not Supported 00:27:42.417 Delete Endurance Group: Not Supported 00:27:42.417 Delete NVM Set: Not Supported 00:27:42.417 Extended LBA Formats Supported: Not Supported 00:27:42.417 Flexible Data Placement Supported: Not Supported 00:27:42.417 00:27:42.417 Controller Memory Buffer Support 00:27:42.417 ================================ 00:27:42.417 Supported: No 00:27:42.417 00:27:42.417 Persistent Memory Region Support 00:27:42.417 ================================ 00:27:42.417 Supported: No 00:27:42.417 00:27:42.417 Admin Command Set Attributes 00:27:42.417 ============================ 00:27:42.417 Security Send/Receive: Not Supported 00:27:42.417 Format NVM: Not Supported 00:27:42.417 Firmware Activate/Download: Not Supported 00:27:42.417 Namespace Management: Not Supported 00:27:42.417 Device Self-Test: Not Supported 00:27:42.417 Directives: Not Supported 00:27:42.417 NVMe-MI: Not Supported 00:27:42.417 Virtualization Management: Not Supported 00:27:42.417 Doorbell Buffer Config: Not Supported 00:27:42.417 Get LBA Status Capability: Not Supported 00:27:42.417 Command & Feature Lockdown Capability: Not Supported 00:27:42.417 Abort Command Limit: 1 00:27:42.417 Async Event Request Limit: 4 00:27:42.417 Number of Firmware Slots: N/A 00:27:42.417 Firmware Slot 1 Read-Only: N/A 00:27:42.417 Firmware Activation Without Reset: N/A 00:27:42.417 Multiple Update Detection Support: N/A 00:27:42.417 Firmware Update Granularity: No Information Provided 00:27:42.417 Per-Namespace SMART Log: No 00:27:42.417 Asymmetric Namespace Access Log Page: Not Supported 00:27:42.417 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:27:42.417 Command Effects Log Page: Not Supported 00:27:42.417 Get Log Page Extended Data: Supported 00:27:42.417 Telemetry Log Pages: Not Supported 00:27:42.417 Persistent Event Log Pages: Not Supported 00:27:42.417 Supported Log Pages Log Page: May Support 00:27:42.417 Commands Supported & Effects Log Page: Not Supported 00:27:42.417 Feature Identifiers & Effects Log Page:May Support 00:27:42.417 NVMe-MI Commands & Effects Log Page: May Support 00:27:42.417 Data Area 4 for Telemetry Log: Not Supported 00:27:42.417 Error Log Page Entries Supported: 128 00:27:42.417 Keep Alive: Not Supported 00:27:42.417 00:27:42.417 NVM Command Set Attributes 00:27:42.417 ========================== 00:27:42.417 Submission Queue Entry Size 00:27:42.417 Max: 1 00:27:42.417 Min: 1 00:27:42.417 Completion Queue Entry Size 00:27:42.417 Max: 1 00:27:42.417 Min: 1 00:27:42.417 Number of Namespaces: 0 00:27:42.417 Compare Command: Not Supported 00:27:42.417 Write Uncorrectable Command: Not Supported 00:27:42.417 Dataset Management Command: Not Supported 00:27:42.417 Write Zeroes Command: Not Supported 00:27:42.417 Set Features Save Field: Not Supported 00:27:42.417 Reservations: Not Supported 00:27:42.417 Timestamp: Not Supported 00:27:42.417 Copy: Not Supported 00:27:42.417 Volatile Write Cache: Not Present 00:27:42.417 Atomic Write Unit (Normal): 1 00:27:42.417 Atomic Write Unit (PFail): 1 00:27:42.417 Atomic Compare & Write Unit: 1 00:27:42.417 Fused Compare & Write: Supported 00:27:42.417 Scatter-Gather List 00:27:42.417 SGL Command Set: Supported 00:27:42.417 SGL Keyed: Supported 00:27:42.417 SGL Bit Bucket Descriptor: Not Supported 00:27:42.417 SGL Metadata Pointer: Not Supported 00:27:42.417 Oversized SGL: Not Supported 00:27:42.417 SGL Metadata Address: Not Supported 00:27:42.417 SGL Offset: Supported 00:27:42.417 Transport SGL Data Block: Not Supported 00:27:42.417 Replay Protected Memory Block: Not Supported 00:27:42.417 00:27:42.417 
Firmware Slot Information 00:27:42.417 ========================= 00:27:42.417 Active slot: 0 00:27:42.417 00:27:42.417 00:27:42.417 Error Log 00:27:42.417 ========= 00:27:42.417 00:27:42.417 Active Namespaces 00:27:42.417 ================= 00:27:42.417 Discovery Log Page 00:27:42.417 ================== 00:27:42.417 Generation Counter: 2 00:27:42.417 Number of Records: 2 00:27:42.417 Record Format: 0 00:27:42.417 00:27:42.417 Discovery Log Entry 0 00:27:42.417 ---------------------- 00:27:42.417 Transport Type: 3 (TCP) 00:27:42.417 Address Family: 1 (IPv4) 00:27:42.417 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:42.417 Entry Flags: 00:27:42.417 Duplicate Returned Information: 1 00:27:42.417 Explicit Persistent Connection Support for Discovery: 1 00:27:42.417 Transport Requirements: 00:27:42.417 Secure Channel: Not Required 00:27:42.417 Port ID: 0 (0x0000) 00:27:42.417 Controller ID: 65535 (0xffff) 00:27:42.418 Admin Max SQ Size: 128 00:27:42.418 Transport Service Identifier: 4420 00:27:42.418 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:42.418 Transport Address: 10.0.0.2 00:27:42.418 Discovery Log Entry 1 00:27:42.418 ---------------------- 00:27:42.418 Transport Type: 3 (TCP) 00:27:42.418 Address Family: 1 (IPv4) 00:27:42.418 Subsystem Type: 2 (NVM Subsystem) 00:27:42.418 Entry Flags: 00:27:42.418 Duplicate Returned Information: 0 00:27:42.418 Explicit Persistent Connection Support for Discovery: 0 00:27:42.418 Transport Requirements: 00:27:42.418 Secure Channel: Not Required 00:27:42.418 Port ID: 0 (0x0000) 00:27:42.418 Controller ID: 65535 (0xffff) 00:27:42.418 Admin Max SQ Size: 128 00:27:42.418 Transport Service Identifier: 4420 00:27:42.418 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:42.418 Transport Address: 10.0.0.2 [2024-11-20 14:46:54.161224] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:42.418 [2024-11-20 
14:46:54.161234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309100) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.418 [2024-11-20 14:46:54.161245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309280) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.418 [2024-11-20 14:46:54.161255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309400) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.418 [2024-11-20 14:46:54.161264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.418 [2024-11-20 14:46:54.161277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.161291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.161305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.161423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 
14:46:54.161429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.161432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.161453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.161466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.161573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.161579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.161582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161589] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:42.418 [2024-11-20 14:46:54.161593] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:42.418 [2024-11-20 14:46:54.161601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 
[2024-11-20 14:46:54.161608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.161614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.161623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.161694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.161700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.161703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.161729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.161739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.161824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.161830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.161833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on 
tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.161857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.161866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.161926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.161931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.161934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.161946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.161961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.161967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.161977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.162098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.162104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:27:42.418 [2024-11-20 14:46:54.162107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.162119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.162131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.162141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.162206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.162212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.162215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.162227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.162241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.162250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.162336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.162341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.418 [2024-11-20 14:46:54.162344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.418 [2024-11-20 14:46:54.162357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.418 [2024-11-20 14:46:54.162363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.418 [2024-11-20 14:46:54.162369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.418 [2024-11-20 14:46:54.162379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.418 [2024-11-20 14:46:54.162480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.418 [2024-11-20 14:46:54.162486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.162489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.162500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.162512] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.162522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.162631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.162636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.162639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.162651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.162663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.162672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.162732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.162737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.162740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.162752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162755] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.162766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.162776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.162884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.162890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.162892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.162904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.162910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.162916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.162925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163046] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 
14:46:54.163298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 
14:46:54.163480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.163942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.419 [2024-11-20 14:46:54.163953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.419 [2024-11-20 14:46:54.163956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.419 [2024-11-20 14:46:54.163967] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.419 [2024-11-20 14:46:54.163974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.419 [2024-11-20 14:46:54.163980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.419 [2024-11-20 14:46:54.163989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.419 [2024-11-20 14:46:54.164094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.164194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164202] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.164297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 
14:46:54.164452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.164596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.164748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.164850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.164856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.164859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.164870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.164873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:42.420 [2024-11-20 14:46:54.164877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.164882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.164891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.168955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.168965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.168968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.168972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.168981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.168984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.168988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7690) 00:27:42.420 [2024-11-20 14:46:54.168993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.420 [2024-11-20 14:46:54.169004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1309580, cid 3, qid 0 00:27:42.420 [2024-11-20 14:46:54.169190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.169196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.169199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.169202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1309580) 
on tqpair=0x12a7690 00:27:42.420 [2024-11-20 14:46:54.169208] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:27:42.420 00:27:42.420 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:42.420 [2024-11-20 14:46:54.208433] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:27:42.420 [2024-11-20 14:46:54.208483] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678119 ] 00:27:42.420 [2024-11-20 14:46:54.247584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:42.420 [2024-11-20 14:46:54.247622] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:42.420 [2024-11-20 14:46:54.247627] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:42.420 [2024-11-20 14:46:54.247638] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:42.420 [2024-11-20 14:46:54.247647] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:42.420 [2024-11-20 14:46:54.251128] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:42.420 [2024-11-20 14:46:54.251151] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1306690 0 00:27:42.420 [2024-11-20 14:46:54.258965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:42.420 
[2024-11-20 14:46:54.258977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:42.420 [2024-11-20 14:46:54.258981] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:42.420 [2024-11-20 14:46:54.258984] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:42.420 [2024-11-20 14:46:54.259009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.259014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.259018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.420 [2024-11-20 14:46:54.259028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:42.420 [2024-11-20 14:46:54.259046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.420 [2024-11-20 14:46:54.266957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.420 [2024-11-20 14:46:54.266965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.420 [2024-11-20 14:46:54.266968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.420 [2024-11-20 14:46:54.266972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.420 [2024-11-20 14:46:54.266982] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:42.420 [2024-11-20 14:46:54.266988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:42.421 [2024-11-20 14:46:54.266993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:42.421 [2024-11-20 14:46:54.267003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 
[2024-11-20 14:46:54.267007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.421 [2024-11-20 14:46:54.267029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.421 [2024-11-20 14:46:54.267188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.267194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.421 [2024-11-20 14:46:54.267197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.421 [2024-11-20 14:46:54.267205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:42.421 [2024-11-20 14:46:54.267211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:42.421 [2024-11-20 14:46:54.267218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.421 [2024-11-20 14:46:54.267241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 
00:27:42.421 [2024-11-20 14:46:54.267305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.267311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.421 [2024-11-20 14:46:54.267314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.421 [2024-11-20 14:46:54.267322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:42.421 [2024-11-20 14:46:54.267329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:42.421 [2024-11-20 14:46:54.267335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.421 [2024-11-20 14:46:54.267360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.421 [2024-11-20 14:46:54.267428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.267434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.421 [2024-11-20 14:46:54.267437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.421 [2024-11-20 14:46:54.267444] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:42.421 [2024-11-20 14:46:54.267453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.421 [2024-11-20 14:46:54.267475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.421 [2024-11-20 14:46:54.267538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.267544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.421 [2024-11-20 14:46:54.267547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.421 [2024-11-20 14:46:54.267555] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:42.421 [2024-11-20 14:46:54.267559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:42.421 [2024-11-20 14:46:54.267565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:42.421 [2024-11-20 14:46:54.267673] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:42.421 [2024-11-20 
14:46:54.267677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:42.421 [2024-11-20 14:46:54.267684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.421 [2024-11-20 14:46:54.267706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.421 [2024-11-20 14:46:54.267764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.267770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.421 [2024-11-20 14:46:54.267773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.421 [2024-11-20 14:46:54.267781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:42.421 [2024-11-20 14:46:54.267789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:42.421 [2024-11-20 14:46:54.267814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.421 [2024-11-20 14:46:54.267872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.267878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.421 [2024-11-20 14:46:54.267881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.421 [2024-11-20 14:46:54.267888] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:42.421 [2024-11-20 14:46:54.267892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:42.421 [2024-11-20 14:46:54.267899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:42.421 [2024-11-20 14:46:54.267907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:42.421 [2024-11-20 14:46:54.267915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.267919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.421 [2024-11-20 14:46:54.267925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.421 [2024-11-20 14:46:54.267935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.421 [2024-11-20 14:46:54.268045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:27:42.421 [2024-11-20 14:46:54.268052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.421 [2024-11-20 14:46:54.268055] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.268058] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=4096, cccid=0 00:27:42.421 [2024-11-20 14:46:54.268062] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368100) on tqpair(0x1306690): expected_datao=0, payload_size=4096 00:27:42.421 [2024-11-20 14:46:54.268066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.268079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.268083] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.421 [2024-11-20 14:46:54.309080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.421 [2024-11-20 14:46:54.309093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.422 [2024-11-20 14:46:54.309096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.422 [2024-11-20 14:46:54.309107] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:42.422 [2024-11-20 14:46:54.309112] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:42.422 [2024-11-20 14:46:54.309116] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:42.422 [2024-11-20 14:46:54.309123] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:42.422 [2024-11-20 14:46:54.309127] 
nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:42.422 [2024-11-20 14:46:54.309132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:42.422 [2024-11-20 14:46:54.309178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.422 [2024-11-20 14:46:54.309248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.422 [2024-11-20 14:46:54.309254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.422 [2024-11-20 14:46:54.309257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.422 [2024-11-20 14:46:54.309266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309278] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.422 [2024-11-20 14:46:54.309283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.422 [2024-11-20 14:46:54.309300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.422 [2024-11-20 14:46:54.309317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.422 [2024-11-20 14:46:54.309332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.422 [2024-11-20 14:46:54.309365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368100, cid 0, qid 0 00:27:42.422 [2024-11-20 14:46:54.309370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368280, cid 1, qid 0 00:27:42.422 [2024-11-20 14:46:54.309375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368400, cid 2, qid 0 00:27:42.422 [2024-11-20 14:46:54.309379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368580, cid 3, qid 0 00:27:42.422 [2024-11-20 14:46:54.309384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.422 [2024-11-20 14:46:54.309475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.422 [2024-11-20 14:46:54.309481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.422 [2024-11-20 14:46:54.309484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.422 [2024-11-20 14:46:54.309493] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:42.422 [2024-11-20 14:46:54.309498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller 
iocs specific (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:42.422 [2024-11-20 14:46:54.309537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.422 [2024-11-20 14:46:54.309602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.422 [2024-11-20 14:46:54.309607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.422 [2024-11-20 14:46:54.309610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.422 [2024-11-20 14:46:54.309666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.309683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:27:42.422 [2024-11-20 14:46:54.309686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.309692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.422 [2024-11-20 14:46:54.309702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.422 [2024-11-20 14:46:54.309772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.422 [2024-11-20 14:46:54.309778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.422 [2024-11-20 14:46:54.309781] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309785] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=4096, cccid=4 00:27:42.422 [2024-11-20 14:46:54.309789] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368700) on tqpair(0x1306690): expected_datao=0, payload_size=4096 00:27:42.422 [2024-11-20 14:46:54.309792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309804] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.309809] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.422 [2024-11-20 14:46:54.350091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.422 [2024-11-20 14:46:54.350094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.422 [2024-11-20 14:46:54.350112] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:42.422 [2024-11-20 14:46:54.350121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.350131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:42.422 [2024-11-20 14:46:54.350138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.422 [2024-11-20 14:46:54.350147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.422 [2024-11-20 14:46:54.350160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.422 [2024-11-20 14:46:54.350240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.422 [2024-11-20 14:46:54.350246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.422 [2024-11-20 14:46:54.350250] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=4096, cccid=4 00:27:42.422 [2024-11-20 14:46:54.350256] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368700) on tqpair(0x1306690): expected_datao=0, payload_size=4096 00:27:42.422 [2024-11-20 14:46:54.350260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350266] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350270] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 
14:46:54.350293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.422 [2024-11-20 14:46:54.350298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.422 [2024-11-20 14:46:54.350301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.422 [2024-11-20 14:46:54.350305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.422 [2024-11-20 14:46:54.350316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:42.423 [2024-11-20 14:46:54.350325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:42.423 [2024-11-20 14:46:54.350332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.423 [2024-11-20 14:46:54.350336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.423 [2024-11-20 14:46:54.350342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.423 [2024-11-20 14:46:54.350353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.423 [2024-11-20 14:46:54.350430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.423 [2024-11-20 14:46:54.350436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.423 [2024-11-20 14:46:54.350439] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.423 [2024-11-20 14:46:54.350442] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=4096, cccid=4 00:27:42.423 [2024-11-20 14:46:54.350446] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1368700) on tqpair(0x1306690): expected_datao=0, payload_size=4096 00:27:42.423 [2024-11-20 14:46:54.350452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.423 [2024-11-20 14:46:54.350462] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.423 [2024-11-20 14:46:54.350465] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.684 [2024-11-20 14:46:54.391957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.684 [2024-11-20 14:46:54.391967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.684 [2024-11-20 14:46:54.391970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.684 [2024-11-20 14:46:54.391974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.684 [2024-11-20 14:46:54.391982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.391991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.391999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.392005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.392009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.392014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.392018] 
nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:42.684 [2024-11-20 14:46:54.392023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:42.684 [2024-11-20 14:46:54.392027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:42.684 [2024-11-20 14:46:54.392039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.684 [2024-11-20 14:46:54.392042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:42.685 [2024-11-20 14:46:54.392081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.685 [2024-11-20 14:46:54.392086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368880, cid 5, qid 0 00:27:42.685 [2024-11-20 14:46:54.392172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.685 [2024-11-20 14:46:54.392178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.685 [2024-11-20 14:46:54.392181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.685 [2024-11-20 14:46:54.392190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.685 [2024-11-20 14:46:54.392195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.685 [2024-11-20 14:46:54.392198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368880) on tqpair=0x1306690 00:27:42.685 [2024-11-20 14:46:54.392211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368880, cid 5, qid 0 00:27:42.685 [2024-11-20 14:46:54.392295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.685 [2024-11-20 14:46:54.392301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.685 [2024-11-20 14:46:54.392304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368880) on tqpair=0x1306690 00:27:42.685 [2024-11-20 14:46:54.392315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368880, cid 5, qid 0 00:27:42.685 [2024-11-20 14:46:54.392396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.685 [2024-11-20 14:46:54.392402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.685 [2024-11-20 14:46:54.392404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368880) on tqpair=0x1306690 00:27:42.685 [2024-11-20 14:46:54.392415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368880, cid 5, qid 0 00:27:42.685 [2024-11-20 14:46:54.392493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.685 [2024-11-20 14:46:54.392498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.685 [2024-11-20 14:46:54.392501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368880) on tqpair=0x1306690 00:27:42.685 [2024-11-20 14:46:54.392517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 
14:46:54.392527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1306690) 00:27:42.685 [2024-11-20 14:46:54.392575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.685 [2024-11-20 14:46:54.392586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368880, cid 5, qid 0 00:27:42.685 [2024-11-20 14:46:54.392591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368700, cid 4, qid 0 00:27:42.685 [2024-11-20 14:46:54.392595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368a00, cid 6, qid 0 00:27:42.685 [2024-11-20 
14:46:54.392599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368b80, cid 7, qid 0 00:27:42.685 [2024-11-20 14:46:54.392743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.685 [2024-11-20 14:46:54.392749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.685 [2024-11-20 14:46:54.392752] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392756] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=8192, cccid=5 00:27:42.685 [2024-11-20 14:46:54.392759] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368880) on tqpair(0x1306690): expected_datao=0, payload_size=8192 00:27:42.685 [2024-11-20 14:46:54.392763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392777] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392781] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.685 [2024-11-20 14:46:54.392794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.685 [2024-11-20 14:46:54.392797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392801] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=512, cccid=4 00:27:42.685 [2024-11-20 14:46:54.392805] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368700) on tqpair(0x1306690): expected_datao=0, payload_size=512 00:27:42.685 [2024-11-20 14:46:54.392808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392814] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392817] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.685 [2024-11-20 14:46:54.392826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.685 [2024-11-20 14:46:54.392829] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392833] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=512, cccid=6 00:27:42.685 [2024-11-20 14:46:54.392836] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368a00) on tqpair(0x1306690): expected_datao=0, payload_size=512 00:27:42.685 [2024-11-20 14:46:54.392840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392846] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392849] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:42.685 [2024-11-20 14:46:54.392858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:42.685 [2024-11-20 14:46:54.392861] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392864] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1306690): datao=0, datal=4096, cccid=7 00:27:42.685 [2024-11-20 14:46:54.392868] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368b80) on tqpair(0x1306690): expected_datao=0, payload_size=4096 00:27:42.685 [2024-11-20 14:46:54.392872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392879] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:42.685 [2024-11-20 14:46:54.392883] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:27:42.685 [2024-11-20 14:46:54.392890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.686 [2024-11-20 14:46:54.392895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.686 [2024-11-20 14:46:54.392898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.686 [2024-11-20 14:46:54.392901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368880) on tqpair=0x1306690 00:27:42.686 [2024-11-20 14:46:54.392911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.686 [2024-11-20 14:46:54.392916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.686 [2024-11-20 14:46:54.392919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.686 [2024-11-20 14:46:54.392923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368700) on tqpair=0x1306690 00:27:42.686 [2024-11-20 14:46:54.392931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.686 [2024-11-20 14:46:54.392936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.686 [2024-11-20 14:46:54.392939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.686 [2024-11-20 14:46:54.392942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368a00) on tqpair=0x1306690 00:27:42.686 [2024-11-20 14:46:54.392953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.686 [2024-11-20 14:46:54.392959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.686 [2024-11-20 14:46:54.392962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.686 [2024-11-20 14:46:54.392965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368b80) on tqpair=0x1306690 00:27:42.686 ===================================================== 00:27:42.686 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:27:42.686 ===================================================== 00:27:42.686 Controller Capabilities/Features 00:27:42.686 ================================ 00:27:42.686 Vendor ID: 8086 00:27:42.686 Subsystem Vendor ID: 8086 00:27:42.686 Serial Number: SPDK00000000000001 00:27:42.686 Model Number: SPDK bdev Controller 00:27:42.686 Firmware Version: 25.01 00:27:42.686 Recommended Arb Burst: 6 00:27:42.686 IEEE OUI Identifier: e4 d2 5c 00:27:42.686 Multi-path I/O 00:27:42.686 May have multiple subsystem ports: Yes 00:27:42.686 May have multiple controllers: Yes 00:27:42.686 Associated with SR-IOV VF: No 00:27:42.686 Max Data Transfer Size: 131072 00:27:42.686 Max Number of Namespaces: 32 00:27:42.686 Max Number of I/O Queues: 127 00:27:42.686 NVMe Specification Version (VS): 1.3 00:27:42.686 NVMe Specification Version (Identify): 1.3 00:27:42.686 Maximum Queue Entries: 128 00:27:42.686 Contiguous Queues Required: Yes 00:27:42.686 Arbitration Mechanisms Supported 00:27:42.686 Weighted Round Robin: Not Supported 00:27:42.686 Vendor Specific: Not Supported 00:27:42.686 Reset Timeout: 15000 ms 00:27:42.686 Doorbell Stride: 4 bytes 00:27:42.686 NVM Subsystem Reset: Not Supported 00:27:42.686 Command Sets Supported 00:27:42.686 NVM Command Set: Supported 00:27:42.686 Boot Partition: Not Supported 00:27:42.686 Memory Page Size Minimum: 4096 bytes 00:27:42.686 Memory Page Size Maximum: 4096 bytes 00:27:42.686 Persistent Memory Region: Not Supported 00:27:42.686 Optional Asynchronous Events Supported 00:27:42.686 Namespace Attribute Notices: Supported 00:27:42.686 Firmware Activation Notices: Not Supported 00:27:42.686 ANA Change Notices: Not Supported 00:27:42.686 PLE Aggregate Log Change Notices: Not Supported 00:27:42.686 LBA Status Info Alert Notices: Not Supported 00:27:42.686 EGE Aggregate Log Change Notices: Not Supported 00:27:42.686 Normal NVM Subsystem Shutdown event: Not Supported 00:27:42.686 Zone Descriptor Change Notices: Not Supported 00:27:42.686 Discovery 
Log Change Notices: Not Supported 00:27:42.686 Controller Attributes 00:27:42.686 128-bit Host Identifier: Supported 00:27:42.686 Non-Operational Permissive Mode: Not Supported 00:27:42.686 NVM Sets: Not Supported 00:27:42.686 Read Recovery Levels: Not Supported 00:27:42.686 Endurance Groups: Not Supported 00:27:42.686 Predictable Latency Mode: Not Supported 00:27:42.686 Traffic Based Keep ALive: Not Supported 00:27:42.686 Namespace Granularity: Not Supported 00:27:42.686 SQ Associations: Not Supported 00:27:42.686 UUID List: Not Supported 00:27:42.686 Multi-Domain Subsystem: Not Supported 00:27:42.686 Fixed Capacity Management: Not Supported 00:27:42.686 Variable Capacity Management: Not Supported 00:27:42.686 Delete Endurance Group: Not Supported 00:27:42.686 Delete NVM Set: Not Supported 00:27:42.686 Extended LBA Formats Supported: Not Supported 00:27:42.686 Flexible Data Placement Supported: Not Supported 00:27:42.686 00:27:42.686 Controller Memory Buffer Support 00:27:42.686 ================================ 00:27:42.686 Supported: No 00:27:42.686 00:27:42.686 Persistent Memory Region Support 00:27:42.686 ================================ 00:27:42.686 Supported: No 00:27:42.686 00:27:42.686 Admin Command Set Attributes 00:27:42.686 ============================ 00:27:42.686 Security Send/Receive: Not Supported 00:27:42.686 Format NVM: Not Supported 00:27:42.686 Firmware Activate/Download: Not Supported 00:27:42.686 Namespace Management: Not Supported 00:27:42.686 Device Self-Test: Not Supported 00:27:42.686 Directives: Not Supported 00:27:42.686 NVMe-MI: Not Supported 00:27:42.686 Virtualization Management: Not Supported 00:27:42.686 Doorbell Buffer Config: Not Supported 00:27:42.686 Get LBA Status Capability: Not Supported 00:27:42.686 Command & Feature Lockdown Capability: Not Supported 00:27:42.686 Abort Command Limit: 4 00:27:42.686 Async Event Request Limit: 4 00:27:42.686 Number of Firmware Slots: N/A 00:27:42.686 Firmware Slot 1 Read-Only: N/A 00:27:42.686 
Firmware Activation Without Reset: N/A 00:27:42.686 Multiple Update Detection Support: N/A 00:27:42.686 Firmware Update Granularity: No Information Provided 00:27:42.686 Per-Namespace SMART Log: No 00:27:42.686 Asymmetric Namespace Access Log Page: Not Supported 00:27:42.686 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:42.686 Command Effects Log Page: Supported 00:27:42.686 Get Log Page Extended Data: Supported 00:27:42.686 Telemetry Log Pages: Not Supported 00:27:42.686 Persistent Event Log Pages: Not Supported 00:27:42.686 Supported Log Pages Log Page: May Support 00:27:42.686 Commands Supported & Effects Log Page: Not Supported 00:27:42.686 Feature Identifiers & Effects Log Page:May Support 00:27:42.686 NVMe-MI Commands & Effects Log Page: May Support 00:27:42.686 Data Area 4 for Telemetry Log: Not Supported 00:27:42.686 Error Log Page Entries Supported: 128 00:27:42.686 Keep Alive: Supported 00:27:42.686 Keep Alive Granularity: 10000 ms 00:27:42.686 00:27:42.686 NVM Command Set Attributes 00:27:42.686 ========================== 00:27:42.686 Submission Queue Entry Size 00:27:42.686 Max: 64 00:27:42.686 Min: 64 00:27:42.686 Completion Queue Entry Size 00:27:42.686 Max: 16 00:27:42.686 Min: 16 00:27:42.686 Number of Namespaces: 32 00:27:42.686 Compare Command: Supported 00:27:42.686 Write Uncorrectable Command: Not Supported 00:27:42.686 Dataset Management Command: Supported 00:27:42.686 Write Zeroes Command: Supported 00:27:42.686 Set Features Save Field: Not Supported 00:27:42.686 Reservations: Supported 00:27:42.686 Timestamp: Not Supported 00:27:42.686 Copy: Supported 00:27:42.686 Volatile Write Cache: Present 00:27:42.686 Atomic Write Unit (Normal): 1 00:27:42.686 Atomic Write Unit (PFail): 1 00:27:42.686 Atomic Compare & Write Unit: 1 00:27:42.686 Fused Compare & Write: Supported 00:27:42.686 Scatter-Gather List 00:27:42.686 SGL Command Set: Supported 00:27:42.686 SGL Keyed: Supported 00:27:42.686 SGL Bit Bucket Descriptor: Not Supported 00:27:42.686 SGL 
Metadata Pointer: Not Supported 00:27:42.687 Oversized SGL: Not Supported 00:27:42.687 SGL Metadata Address: Not Supported 00:27:42.687 SGL Offset: Supported 00:27:42.687 Transport SGL Data Block: Not Supported 00:27:42.687 Replay Protected Memory Block: Not Supported 00:27:42.687 00:27:42.687 Firmware Slot Information 00:27:42.687 ========================= 00:27:42.687 Active slot: 1 00:27:42.687 Slot 1 Firmware Revision: 25.01 00:27:42.687 00:27:42.687 00:27:42.687 Commands Supported and Effects 00:27:42.687 ============================== 00:27:42.687 Admin Commands 00:27:42.687 -------------- 00:27:42.687 Get Log Page (02h): Supported 00:27:42.687 Identify (06h): Supported 00:27:42.687 Abort (08h): Supported 00:27:42.687 Set Features (09h): Supported 00:27:42.687 Get Features (0Ah): Supported 00:27:42.687 Asynchronous Event Request (0Ch): Supported 00:27:42.687 Keep Alive (18h): Supported 00:27:42.687 I/O Commands 00:27:42.687 ------------ 00:27:42.687 Flush (00h): Supported LBA-Change 00:27:42.687 Write (01h): Supported LBA-Change 00:27:42.687 Read (02h): Supported 00:27:42.687 Compare (05h): Supported 00:27:42.687 Write Zeroes (08h): Supported LBA-Change 00:27:42.687 Dataset Management (09h): Supported LBA-Change 00:27:42.687 Copy (19h): Supported LBA-Change 00:27:42.687 00:27:42.687 Error Log 00:27:42.687 ========= 00:27:42.687 00:27:42.687 Arbitration 00:27:42.687 =========== 00:27:42.687 Arbitration Burst: 1 00:27:42.687 00:27:42.687 Power Management 00:27:42.687 ================ 00:27:42.687 Number of Power States: 1 00:27:42.687 Current Power State: Power State #0 00:27:42.687 Power State #0: 00:27:42.687 Max Power: 0.00 W 00:27:42.687 Non-Operational State: Operational 00:27:42.687 Entry Latency: Not Reported 00:27:42.687 Exit Latency: Not Reported 00:27:42.687 Relative Read Throughput: 0 00:27:42.687 Relative Read Latency: 0 00:27:42.687 Relative Write Throughput: 0 00:27:42.687 Relative Write Latency: 0 00:27:42.687 Idle Power: Not Reported 
00:27:42.687 Active Power: Not Reported 00:27:42.687 Non-Operational Permissive Mode: Not Supported 00:27:42.687 00:27:42.687 Health Information 00:27:42.687 ================== 00:27:42.687 Critical Warnings: 00:27:42.687 Available Spare Space: OK 00:27:42.687 Temperature: OK 00:27:42.687 Device Reliability: OK 00:27:42.687 Read Only: No 00:27:42.687 Volatile Memory Backup: OK 00:27:42.687 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:42.687 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:42.687 Available Spare: 0% 00:27:42.687 Available Spare Threshold: 0% 00:27:42.687 Life Percentage Used:[2024-11-20 14:46:54.393049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1306690) 00:27:42.687 [2024-11-20 14:46:54.393060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.687 [2024-11-20 14:46:54.393071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368b80, cid 7, qid 0 00:27:42.687 [2024-11-20 14:46:54.393146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.687 [2024-11-20 14:46:54.393151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.687 [2024-11-20 14:46:54.393155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368b80) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393186] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:42.687 [2024-11-20 14:46:54.393195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368100) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.687 [2024-11-20 14:46:54.393205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368280) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.687 [2024-11-20 14:46:54.393214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368400) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.687 [2024-11-20 14:46:54.393222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368580) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.687 [2024-11-20 14:46:54.393233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1306690) 00:27:42.687 [2024-11-20 14:46:54.393249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.687 [2024-11-20 14:46:54.393260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368580, cid 3, qid 0 00:27:42.687 [2024-11-20 14:46:54.393322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.687 [2024-11-20 14:46:54.393328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.687 [2024-11-20 14:46:54.393331] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368580) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1306690) 00:27:42.687 [2024-11-20 14:46:54.393352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.687 [2024-11-20 14:46:54.393365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368580, cid 3, qid 0 00:27:42.687 [2024-11-20 14:46:54.393437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.687 [2024-11-20 14:46:54.393442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.687 [2024-11-20 14:46:54.393445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368580) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.393453] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:42.687 [2024-11-20 14:46:54.393457] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:42.687 [2024-11-20 14:46:54.393465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1306690) 00:27:42.687 
[2024-11-20 14:46:54.393477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.687 [2024-11-20 14:46:54.393487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368580, cid 3, qid 0 00:27:42.687 [2024-11-20 14:46:54.393548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:42.687 [2024-11-20 14:46:54.393554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:42.687 [2024-11-20 14:46:54.393557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:42.687 [2024-11-20 14:46:54.393561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368580) on tqpair=0x1306690 00:27:42.687 [2024-11-20 14:46:54.400171] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:27:42.690 0% 00:27:42.690 Data Units Read: 0 00:27:42.690 Data Units Written: 0 00:27:42.690 Host Read Commands: 0 00:27:42.690 Host Write Commands: 0 00:27:42.690 Controller Busy Time: 0 minutes 00:27:42.690 Power Cycles: 0 
00:27:42.690 Power On Hours: 0 hours 00:27:42.690 Unsafe Shutdowns: 0 00:27:42.690 Unrecoverable Media Errors: 0 00:27:42.690 Lifetime Error Log Entries: 0 00:27:42.690 Warning Temperature Time: 0 minutes 00:27:42.690 Critical Temperature Time: 0 minutes 00:27:42.690 00:27:42.690 Number of Queues 00:27:42.690 ================ 00:27:42.690 Number of I/O Submission Queues: 127 00:27:42.690 Number of I/O Completion Queues: 127 00:27:42.690 00:27:42.690 Active Namespaces 00:27:42.690 ================= 00:27:42.690 Namespace ID:1 00:27:42.690 Error Recovery Timeout: Unlimited 00:27:42.690 Command Set Identifier: NVM (00h) 00:27:42.690 Deallocate: Supported 00:27:42.690 Deallocated/Unwritten Error: Not Supported 00:27:42.690 Deallocated Read Value: Unknown 00:27:42.690 Deallocate in Write Zeroes: Not Supported 00:27:42.690 Deallocated Guard Field: 0xFFFF 00:27:42.690 Flush: Supported 00:27:42.690 Reservation: Supported 00:27:42.690 Namespace Sharing Capabilities: Multiple Controllers 00:27:42.690 Size (in LBAs): 131072 (0GiB) 00:27:42.690 Capacity (in LBAs): 131072 (0GiB) 00:27:42.690 Utilization (in LBAs): 131072 (0GiB) 00:27:42.690 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:42.690 EUI64: ABCDEF0123456789 00:27:42.690 UUID: f2a4d079-0590-4523-80ac-cbc3506fb565 00:27:42.690 Thin Provisioning: Not Supported 00:27:42.690 Per-NS Atomic Units: Yes 00:27:42.690 Atomic Boundary Size (Normal): 0 00:27:42.690 Atomic Boundary Size (PFail): 0 00:27:42.690 Atomic Boundary Offset: 0 00:27:42.690 Maximum Single Source Range Length: 65535 00:27:42.690 Maximum Copy Length: 65535 00:27:42.690 Maximum Source Range Count: 1 00:27:42.690 NGUID/EUI64 Never Reused: No 00:27:42.690 Namespace Write Protected: No 00:27:42.690 Number of LBA Formats: 1 00:27:42.690 Current LBA Format: LBA Format #00 00:27:42.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:42.690 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:42.690 14:46:54 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:42.690 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:42.691 rmmod nvme_tcp 00:27:42.691 rmmod nvme_fabrics 00:27:42.691 rmmod nvme_keyring 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1677872 ']' 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1677872 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1677872 ']' 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 
-- # kill -0 1677872 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1677872 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1677872' 00:27:42.691 killing process with pid 1677872 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1677872 00:27:42.691 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1677872 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.950 14:46:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.857 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.857 00:27:44.857 real 0m9.966s 00:27:44.857 user 0m8.201s 00:27:44.857 sys 0m4.888s 00:27:44.857 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.857 14:46:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:44.857 ************************************ 00:27:44.857 END TEST nvmf_identify 00:27:44.857 ************************************ 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.117 ************************************ 00:27:45.117 START TEST nvmf_perf 00:27:45.117 ************************************ 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:45.117 * Looking for test storage... 
00:27:45.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:45.117 14:46:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.117 --rc genhtml_branch_coverage=1 00:27:45.117 --rc genhtml_function_coverage=1 00:27:45.117 --rc genhtml_legend=1 00:27:45.117 --rc geninfo_all_blocks=1 00:27:45.117 --rc geninfo_unexecuted_blocks=1 00:27:45.117 00:27:45.117 ' 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:27:45.117 --rc genhtml_branch_coverage=1 00:27:45.117 --rc genhtml_function_coverage=1 00:27:45.117 --rc genhtml_legend=1 00:27:45.117 --rc geninfo_all_blocks=1 00:27:45.117 --rc geninfo_unexecuted_blocks=1 00:27:45.117 00:27:45.117 ' 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.117 --rc genhtml_branch_coverage=1 00:27:45.117 --rc genhtml_function_coverage=1 00:27:45.117 --rc genhtml_legend=1 00:27:45.117 --rc geninfo_all_blocks=1 00:27:45.117 --rc geninfo_unexecuted_blocks=1 00:27:45.117 00:27:45.117 ' 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:45.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.117 --rc genhtml_branch_coverage=1 00:27:45.117 --rc genhtml_function_coverage=1 00:27:45.117 --rc genhtml_legend=1 00:27:45.117 --rc geninfo_all_blocks=1 00:27:45.117 --rc geninfo_unexecuted_blocks=1 00:27:45.117 00:27:45.117 ' 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.117 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:45.118 14:46:57 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.118 14:46:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.692 14:47:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.692 
14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.692 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:51.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:51.693 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:51.693 Found net devices under 0000:86:00.0: cvl_0_0 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.693 14:47:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:51.693 Found net devices under 0000:86:00.1: cvl_0_1 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:27:51.693 00:27:51.693 --- 10.0.0.2 ping statistics --- 00:27:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.693 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:27:51.693 00:27:51.693 --- 10.0.0.1 ping statistics --- 00:27:51.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.693 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1681624 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1681624 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1681624 ']' 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.693 14:47:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 [2024-11-20 14:47:03.026615] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:27:51.693 [2024-11-20 14:47:03.026666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.693 [2024-11-20 14:47:03.106562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.693 [2024-11-20 14:47:03.150106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.693 [2024-11-20 14:47:03.150142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.693 [2024-11-20 14:47:03.150149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.693 [2024-11-20 14:47:03.150155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.693 [2024-11-20 14:47:03.150160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.693 [2024-11-20 14:47:03.151806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.693 [2024-11-20 14:47:03.151920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.693 [2024-11-20 14:47:03.152028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.693 [2024-11-20 14:47:03.152028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.693 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.693 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:27:51.693 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.693 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.693 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:51.694 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.694 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:51.694 14:47:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:54.975 14:47:06 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:54.975 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:55.233 [2024-11-20 14:47:06.945353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.233 14:47:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.233 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:55.233 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.492 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:55.492 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:55.750 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.008 [2024-11-20 14:47:07.748289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.008 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:27:56.267 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:27:56.267 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:27:56.267 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:56.267 14:47:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:27:57.641 Initializing NVMe Controllers 00:27:57.641 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:27:57.641 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:27:57.641 Initialization complete. Launching workers. 00:27:57.641 ======================================================== 00:27:57.641 Latency(us) 00:27:57.641 Device Information : IOPS MiB/s Average min max 00:27:57.641 PCIE (0000:5e:00.0) NSID 1 from core 0: 98063.25 383.06 325.87 20.32 4312.45 00:27:57.641 ======================================================== 00:27:57.641 Total : 98063.25 383.06 325.87 20.32 4312.45 00:27:57.641 00:27:57.641 14:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.015 Initializing NVMe Controllers 00:27:59.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.015 Initialization complete. Launching workers. 
00:27:59.015 ======================================================== 00:27:59.015 Latency(us) 00:27:59.015 Device Information : IOPS MiB/s Average min max 00:27:59.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.00 0.45 8818.06 109.64 45756.59 00:27:59.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16201.28 7189.34 47886.89 00:27:59.015 ======================================================== 00:27:59.015 Total : 179.00 0.70 11416.62 109.64 47886.89 00:27:59.015 00:27:59.016 14:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.947 Initializing NVMe Controllers 00:27:59.947 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:59.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:59.947 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:59.947 Initialization complete. Launching workers. 
00:27:59.947 ======================================================== 00:27:59.947 Latency(us) 00:27:59.947 Device Information : IOPS MiB/s Average min max 00:27:59.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10801.51 42.19 2970.33 452.84 7954.18 00:27:59.947 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3877.83 15.15 8286.33 7116.76 16026.66 00:27:59.947 ======================================================== 00:27:59.947 Total : 14679.34 57.34 4374.66 452.84 16026.66 00:27:59.947 00:27:59.947 14:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:59.947 14:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:59.947 14:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:02.474 Initializing NVMe Controllers 00:28:02.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.474 Controller IO queue size 128, less than required. 00:28:02.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.474 Controller IO queue size 128, less than required. 00:28:02.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:02.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:02.474 Initialization complete. Launching workers. 
00:28:02.474 ======================================================== 00:28:02.474 Latency(us) 00:28:02.474 Device Information : IOPS MiB/s Average min max 00:28:02.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1780.93 445.23 73214.87 55647.90 111142.96 00:28:02.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.64 149.16 219604.76 97851.43 345837.57 00:28:02.474 ======================================================== 00:28:02.474 Total : 2377.56 594.39 109950.56 55647.90 345837.57 00:28:02.474 00:28:02.474 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:02.732 No valid NVMe controllers or AIO or URING devices found 00:28:02.732 Initializing NVMe Controllers 00:28:02.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.732 Controller IO queue size 128, less than required. 00:28:02.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.732 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:02.732 Controller IO queue size 128, less than required. 00:28:02.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.732 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:02.732 WARNING: Some requested NVMe devices were skipped 00:28:02.732 14:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:05.260 Initializing NVMe Controllers 00:28:05.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.260 Controller IO queue size 128, less than required. 00:28:05.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.260 Controller IO queue size 128, less than required. 00:28:05.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:05.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:05.260 Initialization complete. Launching workers. 
00:28:05.260 00:28:05.260 ==================== 00:28:05.260 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:05.261 TCP transport: 00:28:05.261 polls: 10933 00:28:05.261 idle_polls: 7580 00:28:05.261 sock_completions: 3353 00:28:05.261 nvme_completions: 6247 00:28:05.261 submitted_requests: 9362 00:28:05.261 queued_requests: 1 00:28:05.261 00:28:05.261 ==================== 00:28:05.261 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:05.261 TCP transport: 00:28:05.261 polls: 14897 00:28:05.261 idle_polls: 11366 00:28:05.261 sock_completions: 3531 00:28:05.261 nvme_completions: 6393 00:28:05.261 submitted_requests: 9564 00:28:05.261 queued_requests: 1 00:28:05.261 ======================================================== 00:28:05.261 Latency(us) 00:28:05.261 Device Information : IOPS MiB/s Average min max 00:28:05.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1559.03 389.76 84081.29 46501.45 142981.92 00:28:05.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1595.47 398.87 80536.00 46668.93 108224.07 00:28:05.261 ======================================================== 00:28:05.261 Total : 3154.50 788.62 82288.17 46501.45 142981.92 00:28:05.261 00:28:05.261 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:05.261 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.519 rmmod nvme_tcp 00:28:05.519 rmmod nvme_fabrics 00:28:05.519 rmmod nvme_keyring 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1681624 ']' 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1681624 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1681624 ']' 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1681624 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1681624 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1681624' 00:28:05.519 killing process with pid 1681624 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1681624 00:28:05.519 14:47:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1681624 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:06.892 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.151 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.151 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.151 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.151 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.151 14:47:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.062 00:28:09.062 real 0m24.087s 00:28:09.062 user 1m2.474s 00:28:09.062 sys 0m8.237s 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.062 ************************************ 00:28:09.062 END TEST nvmf_perf 00:28:09.062 ************************************ 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.062 ************************************ 00:28:09.062 START TEST nvmf_fio_host 00:28:09.062 ************************************ 00:28:09.062 14:47:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:09.322 * Looking for test storage... 00:28:09.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.322 14:47:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.322 14:47:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.322 --rc genhtml_branch_coverage=1 00:28:09.322 --rc genhtml_function_coverage=1 00:28:09.322 --rc genhtml_legend=1 00:28:09.322 --rc geninfo_all_blocks=1 00:28:09.322 --rc geninfo_unexecuted_blocks=1 00:28:09.322 00:28:09.322 ' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.322 --rc genhtml_branch_coverage=1 00:28:09.322 --rc genhtml_function_coverage=1 00:28:09.322 --rc genhtml_legend=1 00:28:09.322 --rc geninfo_all_blocks=1 00:28:09.322 --rc geninfo_unexecuted_blocks=1 00:28:09.322 00:28:09.322 ' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.322 --rc genhtml_branch_coverage=1 00:28:09.322 --rc genhtml_function_coverage=1 00:28:09.322 --rc genhtml_legend=1 00:28:09.322 --rc geninfo_all_blocks=1 00:28:09.322 --rc geninfo_unexecuted_blocks=1 00:28:09.322 00:28:09.322 ' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.322 --rc genhtml_branch_coverage=1 00:28:09.322 --rc genhtml_function_coverage=1 00:28:09.322 --rc genhtml_legend=1 00:28:09.322 --rc geninfo_all_blocks=1 00:28:09.322 --rc geninfo_unexecuted_blocks=1 00:28:09.322 00:28:09.322 ' 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.322 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:09.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:09.323 14:47:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.323 14:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.894 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:28:15.895 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:15.895 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.895 14:47:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:15.895 Found net devices under 0000:86:00.0: cvl_0_0 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:15.895 Found net devices under 0000:86:00.1: cvl_0_1 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.895 14:47:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.895 14:47:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.895 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.895 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.895 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.895 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:28:15.896 00:28:15.896 --- 10.0.0.2 ping statistics --- 00:28:15.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.896 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:28:15.896 00:28:15.896 --- 10.0.0.1 ping statistics --- 00:28:15.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.896 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1687656 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1687656 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1687656 ']' 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.896 [2024-11-20 14:47:27.204579] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:28:15.896 [2024-11-20 14:47:27.204629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.896 [2024-11-20 14:47:27.287068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.896 [2024-11-20 14:47:27.329990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.896 [2024-11-20 14:47:27.330028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:15.896 [2024-11-20 14:47:27.330035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.896 [2024-11-20 14:47:27.330042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.896 [2024-11-20 14:47:27.330047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.896 [2024-11-20 14:47:27.331657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.896 [2024-11-20 14:47:27.331765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.896 [2024-11-20 14:47:27.331870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.896 [2024-11-20 14:47:27.331871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:15.896 [2024-11-20 14:47:27.599157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.896 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:16.153 Malloc1 00:28:16.154 14:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.154 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:16.411 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.669 [2024-11-20 14:47:28.484193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.669 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:16.928 14:47:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:16.928 14:47:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:17.185 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:17.185 fio-3.35 00:28:17.185 Starting 1 thread 00:28:19.712 00:28:19.712 test: (groupid=0, jobs=1): err= 0: pid=1688030: Wed Nov 20 14:47:31 2024 00:28:19.712 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.2MiB/2005msec) 00:28:19.712 slat (nsec): min=1577, max=239088, avg=1735.90, stdev=2216.44 00:28:19.712 clat (usec): min=3178, max=10267, avg=6141.96, stdev=493.73 00:28:19.712 lat (usec): min=3211, max=10269, avg=6143.70, stdev=493.65 00:28:19.712 clat percentiles (usec): 00:28:19.712 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:28:19.712 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:28:19.712 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:28:19.712 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 9241], 99.95th=[ 9503], 00:28:19.712 | 99.99th=[10290] 00:28:19.712 bw ( KiB/s): min=45424, max=46488, per=99.95%, avg=46034.00, stdev=446.68, samples=4 00:28:19.712 iops : min=11356, max=11622, avg=11508.50, stdev=111.67, samples=4 00:28:19.712 write: IOPS=11.4k, BW=44.7MiB/s (46.8MB/s)(89.5MiB/2005msec); 0 zone resets 00:28:19.712 slat (nsec): min=1618, max=226424, avg=1800.78, stdev=1677.23 00:28:19.712 clat (usec): min=2420, max=9404, avg=4969.84, stdev=402.24 00:28:19.712 lat (usec): min=2435, max=9406, avg=4971.64, stdev=402.26 00:28:19.712 clat percentiles (usec): 00:28:19.712 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4686], 00:28:19.712 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 
00:28:19.712 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:28:19.712 | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7635], 99.95th=[ 8979], 00:28:19.712 | 99.99th=[ 9241] 00:28:19.712 bw ( KiB/s): min=45248, max=46272, per=100.00%, avg=45732.00, stdev=421.78, samples=4 00:28:19.712 iops : min=11312, max=11568, avg=11433.00, stdev=105.45, samples=4 00:28:19.712 lat (msec) : 4=0.32%, 10=99.67%, 20=0.02% 00:28:19.712 cpu : usr=74.05%, sys=24.95%, ctx=101, majf=0, minf=3 00:28:19.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:19.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:19.712 issued rwts: total=23086,22924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:19.712 00:28:19.712 Run status group 0 (all jobs): 00:28:19.712 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.2MiB (94.6MB), run=2005-2005msec 00:28:19.712 WRITE: bw=44.7MiB/s (46.8MB/s), 44.7MiB/s-44.7MiB/s (46.8MB/s-46.8MB/s), io=89.5MiB (93.9MB), run=2005-2005msec 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:19.712 14:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:19.970 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:19.970 fio-3.35 00:28:19.970 Starting 1 thread 00:28:22.501 00:28:22.501 test: (groupid=0, jobs=1): err= 0: pid=1688592: Wed Nov 20 14:47:33 2024 00:28:22.501 read: IOPS=10.5k, BW=164MiB/s (172MB/s)(330MiB/2007msec) 00:28:22.501 slat (nsec): min=2583, max=84064, avg=2929.11, stdev=1314.02 00:28:22.501 clat (usec): min=1348, max=51220, avg=7130.87, stdev=3488.84 00:28:22.501 lat (usec): min=1351, max=51223, avg=7133.80, stdev=3488.88 00:28:22.501 clat percentiles (usec): 00:28:22.501 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5538], 00:28:22.501 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7373], 00:28:22.501 | 70.00th=[ 7701], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9634], 00:28:22.501 | 99.00th=[12256], 99.50th=[44827], 99.90th=[49021], 99.95th=[49546], 00:28:22.501 | 99.99th=[50070] 00:28:22.501 bw ( KiB/s): min=80416, max=93184, per=50.78%, avg=85472.00, stdev=5470.97, samples=4 00:28:22.501 iops : min= 5026, max= 5824, avg=5342.00, stdev=341.94, samples=4 00:28:22.501 write: IOPS=6313, BW=98.6MiB/s (103MB/s)(174MiB/1766msec); 0 zone resets 00:28:22.501 slat (usec): min=29, max=387, avg=31.92, stdev= 7.33 00:28:22.501 clat (usec): min=2901, max=14915, avg=8738.56, stdev=1602.95 00:28:22.501 lat (usec): min=2932, max=15026, avg=8770.48, stdev=1604.26 00:28:22.501 clat percentiles (usec): 00:28:22.501 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 6915], 
20.00th=[ 7373], 00:28:22.501 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8979], 00:28:22.501 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11076], 95.00th=[11731], 00:28:22.501 | 99.00th=[12780], 99.50th=[13173], 99.90th=[14353], 99.95th=[14615], 00:28:22.501 | 99.99th=[14877] 00:28:22.501 bw ( KiB/s): min=82976, max=97280, per=87.98%, avg=88872.00, stdev=6019.03, samples=4 00:28:22.501 iops : min= 5186, max= 6080, avg=5554.50, stdev=376.19, samples=4 00:28:22.501 lat (msec) : 2=0.07%, 4=1.59%, 10=88.74%, 20=9.21%, 50=0.38% 00:28:22.501 lat (msec) : 100=0.01% 00:28:22.501 cpu : usr=86.89%, sys=12.41%, ctx=46, majf=0, minf=3 00:28:22.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:28:22.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:22.501 issued rwts: total=21115,11149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:22.501 00:28:22.501 Run status group 0 (all jobs): 00:28:22.501 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=330MiB (346MB), run=2007-2007msec 00:28:22.501 WRITE: bw=98.6MiB/s (103MB/s), 98.6MiB/s-98.6MiB/s (103MB/s-103MB/s), io=174MiB (183MB), run=1766-1766msec 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:28:22.501 14:47:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.501 rmmod nvme_tcp 00:28:22.501 rmmod nvme_fabrics 00:28:22.501 rmmod nvme_keyring 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1687656 ']' 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1687656 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1687656 ']' 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1687656 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687656 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1687656' 00:28:22.501 killing process with pid 1687656 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1687656 00:28:22.501 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1687656 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.760 14:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.664 14:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.664 00:28:24.664 real 0m15.636s 00:28:24.664 user 0m45.921s 00:28:24.664 sys 0m6.512s 00:28:24.664 14:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.664 14:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.664 ************************************ 00:28:24.664 END TEST nvmf_fio_host 00:28:24.664 ************************************ 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.924 ************************************ 00:28:24.924 START TEST nvmf_failover 00:28:24.924 ************************************ 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:24.924 * Looking for test storage... 
00:28:24.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:24.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.924 --rc genhtml_branch_coverage=1 00:28:24.924 --rc genhtml_function_coverage=1 00:28:24.924 --rc genhtml_legend=1 00:28:24.924 --rc geninfo_all_blocks=1 00:28:24.924 --rc geninfo_unexecuted_blocks=1 00:28:24.924 00:28:24.924 ' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:28:24.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.924 --rc genhtml_branch_coverage=1 00:28:24.924 --rc genhtml_function_coverage=1 00:28:24.924 --rc genhtml_legend=1 00:28:24.924 --rc geninfo_all_blocks=1 00:28:24.924 --rc geninfo_unexecuted_blocks=1 00:28:24.924 00:28:24.924 ' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:24.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.924 --rc genhtml_branch_coverage=1 00:28:24.924 --rc genhtml_function_coverage=1 00:28:24.924 --rc genhtml_legend=1 00:28:24.924 --rc geninfo_all_blocks=1 00:28:24.924 --rc geninfo_unexecuted_blocks=1 00:28:24.924 00:28:24.924 ' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:24.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.924 --rc genhtml_branch_coverage=1 00:28:24.924 --rc genhtml_function_coverage=1 00:28:24.924 --rc genhtml_legend=1 00:28:24.924 --rc geninfo_all_blocks=1 00:28:24.924 --rc geninfo_unexecuted_blocks=1 00:28:24.924 00:28:24.924 ' 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:28:24.924 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:24.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.925 14:47:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.493 14:47:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:31.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.493 14:47:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:31.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.493 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.494 14:47:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:31.494 Found net devices under 0000:86:00.0: cvl_0_0 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:31.494 Found net devices under 0000:86:00.1: cvl_0_1 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.494 14:47:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:31.494 00:28:31.494 --- 10.0.0.2 ping statistics --- 00:28:31.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.494 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:28:31.494 00:28:31.494 --- 10.0.0.1 ping statistics --- 00:28:31.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.494 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1692454 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1692454 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1692454 ']' 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:31.494 [2024-11-20 14:47:42.771038] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:28:31.494 [2024-11-20 14:47:42.771083] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.494 [2024-11-20 14:47:42.848807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:31.494 [2024-11-20 14:47:42.891509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.494 [2024-11-20 14:47:42.891548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.494 [2024-11-20 14:47:42.891555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.494 [2024-11-20 14:47:42.891560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:31.494 [2024-11-20 14:47:42.891566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.494 [2024-11-20 14:47:42.893024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.494 [2024-11-20 14:47:42.893130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.494 [2024-11-20 14:47:42.893131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.494 14:47:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:31.494 14:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.494 14:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:31.494 [2024-11-20 14:47:43.195884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.494 14:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:31.494 Malloc0 00:28:31.494 14:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.752 14:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.009 14:47:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.266 [2024-11-20 14:47:44.001222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.266 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:32.266 [2024-11-20 14:47:44.209798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:32.522 [2024-11-20 14:47:44.410455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1692788 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1692788 /var/tmp/bdevperf.sock 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1692788 ']' 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.522 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:32.780 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.780 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:32.780 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:33.037 NVMe0n1 00:28:33.037 14:47:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:33.294 00:28:33.294 14:47:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:33.294 14:47:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1692794 00:28:33.294 14:47:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
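[editor's note: the xtrace above drives a fixed RPC sequence — create the TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, and listeners on ports 4420/4421/4422 — before attaching the bdevperf controller with `-x failover`. A minimal sketch that reconstructs those command lines as data (NQN, address, and ports copied from the log; the relative `scripts/rpc.py` path is an assumption, and this is an illustration, not the test script itself):]

```python
# Hypothetical reconstruction of the target-setup RPC sequence seen in the log.
RPC = "scripts/rpc.py"  # assumed path to the SPDK rpc.py helper
NQN = "nqn.2016-06.io.spdk:cnode1"

def failover_setup_cmds(ip="10.0.0.2", ports=(4420, 4421, 4422)):
    """Return the RPC command lines mirroring the setup steps in the log."""
    cmds = [
        f"{RPC} nvmf_create_transport -t tcp -o -u 8192",
        f"{RPC} bdev_malloc_create 64 512 -b Malloc0",
        f"{RPC} nvmf_create_subsystem {NQN} -a -s SPDK00000000000001",
        f"{RPC} nvmf_subsystem_add_ns {NQN} Malloc0",
    ]
    # One listener per port; the test later removes/re-adds these to force path failover.
    cmds += [f"{RPC} nvmf_subsystem_add_listener {NQN} -t tcp -a {ip} -s {p}" for p in ports]
    return cmds

for c in failover_setup_cmds():
    print(c)
```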
00:28:34.298 14:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.576 [2024-11-20 14:47:46.384832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c62d0 is same with the state(6) to be set 00:28:34.577 [... identical *ERROR* message repeated 61 more times for tqpair=0x15c62d0, timestamps 14:47:46.384880 through 14:47:46.385253 ...] 00:28:34.577 14:47:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:28:37.870 14:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:37.870 00:28:37.870 14:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:38.127 14:47:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:28:41.405 14:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.405 [2024-11-20 14:47:53.157999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.405 14:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:28:42.337 14:47:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:42.601 [2024-11-20 14:47:54.389989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c7ce0 is same with the state(6) to be set 00:28:42.601 [... identical *ERROR* message repeated 7 more times for tqpair=0x15c7ce0, timestamps 14:47:54.390026 through 14:47:54.390065 ...] 00:28:42.601 14:47:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1692794 00:28:49.168 { 00:28:49.168 "results": [ 00:28:49.168 { 00:28:49.168 "job": "NVMe0n1", 00:28:49.168 "core_mask": "0x1", 00:28:49.168 "workload": "verify", 00:28:49.168 "status": "finished", 00:28:49.168 "verify_range": { 00:28:49.168 "start": 0, 00:28:49.168 "length": 16384 00:28:49.168 }, 00:28:49.168 "queue_depth": 128, 00:28:49.168 "io_size": 4096, 00:28:49.168 "runtime": 15.002676, 00:28:49.168 "iops": 10835.60026224655, 00:28:49.168 "mibps": 42.326563524400584, 00:28:49.168 "io_failed": 8101, 00:28:49.168 "io_timeout": 0, 00:28:49.168 "avg_latency_us": 11229.455561112707, 00:28:49.168 "min_latency_us": 429.1895652173913, 00:28:49.168 "max_latency_us": 23478.98434782609 00:28:49.168 } 00:28:49.168 ], 00:28:49.168
"core_count": 1 00:28:49.168 } 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1692788 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1692788 ']' 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1692788 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1692788 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1692788' 00:28:49.168 killing process with pid 1692788 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1692788 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1692788 00:28:49.168 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:49.168 [2024-11-20 14:47:44.472725] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:28:49.168 [2024-11-20 14:47:44.472779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692788 ] 00:28:49.168 [2024-11-20 14:47:44.546780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.168 [2024-11-20 14:47:44.589025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.168 Running I/O for 15 seconds... 00:28:49.168 10916.00 IOPS, 42.64 MiB/s [2024-11-20T13:48:01.126Z] [2024-11-20 14:47:46.385987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.168 [2024-11-20 14:47:46.386020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.168 [... analogous nvme_io_qpair_print_command / ABORTED - SQ DELETION print_completion pairs repeated for READ lba:95640-95808 and WRITE lba:95872-96032, timestamps 14:47:46.386036 through 14:47:46.386668 ...] 00:28:49.169 [2024-11-20 14:47:46.386675] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 
14:47:46.386846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.169 [2024-11-20 14:47:46.386881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.169 [2024-11-20 14:47:46.386889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.386904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.386920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386927] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.386934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.386961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.386975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.386989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.386996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 
[2024-11-20 14:47:46.387280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.170 [2024-11-20 14:47:46.387478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.170 [2024-11-20 14:47:46.387485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.171 [2024-11-20 14:47:46.387615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96544 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96552 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96560 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 
14:47:46.387716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96568 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96584 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96592 len:8 
PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96600 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96616 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387872] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96624 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96632 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96640 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96648 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.387978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.387983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.387989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.387996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.388000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.388006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.388012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.388018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.388023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.400388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.400398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.400406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.171 [2024-11-20 14:47:46.400412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.171 [2024-11-20 14:47:46.400418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:28:49.171 [2024-11-20 14:47:46.400424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.171 [2024-11-20 14:47:46.400431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.172 [2024-11-20 14:47:46.400436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.172 [2024-11-20 14:47:46.400441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:28:49.172 [2024-11-20 14:47:46.400450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.172 [2024-11-20 14:47:46.400462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.172 [2024-11-20 14:47:46.400468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:28:49.172 [2024-11-20 14:47:46.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.172 [2024-11-20 14:47:46.400485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:28:49.172 [2024-11-20 14:47:46.400491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:28:49.172 [2024-11-20 14:47:46.400497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400541] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:49.172 [2024-11-20 14:47:46.400564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.172 [2024-11-20 14:47:46.400571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.172 [2024-11-20 14:47:46.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.172 [2024-11-20 14:47:46.400598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.172 [2024-11-20 14:47:46.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:46.400623] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:49.172 [2024-11-20 14:47:46.400659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a52340 (9): Bad file descriptor 00:28:49.172 [2024-11-20 14:47:46.404419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:49.172 [2024-11-20 14:47:46.468618] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:28:49.172 10583.50 IOPS, 41.34 MiB/s [2024-11-20T13:48:01.130Z] 10751.33 IOPS, 42.00 MiB/s [2024-11-20T13:48:01.130Z] 10801.00 IOPS, 42.19 MiB/s [2024-11-20T13:48:01.130Z] [2024-11-20 14:47:49.937632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.172 [2024-11-20 14:47:49.937676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937736] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 
[2024-11-20 14:47:49.937909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.937988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.937995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.938003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.172 [2024-11-20 14:47:49.938010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.172 [2024-11-20 14:47:49.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 
[2024-11-20 14:47:49.938166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 
14:47:49.938416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.173 [2024-11-20 14:47:49.938597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.173 [2024-11-20 14:47:49.938605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.174 [2024-11-20 14:47:49.938611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.174 [2024-11-20 14:47:49.938619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.174 [2024-11-20 14:47:49.938626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.174 [2024-11-20 14:47:49.938634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.174 [2024-11-20 14:47:49.938640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.174 [2024-11-20 14:47:49.938649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.174 [2024-11-20 14:47:49.938657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.174 [2024-11-20 14:47:49.938665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:49.174 [2024-11-20 14:47:49.938671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.174 [2024-11-20 14:47:49.938679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:49.174 [2024-11-20 14:47:49.938686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.174 [... identical WRITE/ABORTED - SQ DELETION pairs for lba:34544 through lba:34624 elided ...]
00:28:49.174 [2024-11-20 14:47:49.938864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:49.174 [2024-11-20 14:47:49.938871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34632 len:8 PRP1 0x0 PRP2 0x0
00:28:49.174 [2024-11-20 14:47:49.938878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.174 [2024-11-20 14:47:49.938887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:49.174 [... identical manual-completion/ABORTED - SQ DELETION cycles for WRITE lba:34640 through lba:34960 and READ lba:33952 through lba:34000 elided ...]
00:28:49.176 [2024-11-20 14:47:49.950433] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:28:49.176 [2024-11-20 14:47:49.950461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:49.176 [2024-11-20 14:47:49.950471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.176 [... identical ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs for cid:2, cid:1 and cid:0 elided ...]
00:28:49.176 [2024-11-20 14:47:49.950536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:28:49.176 [2024-11-20 14:47:49.950563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a52340 (9): Bad file descriptor
00:28:49.176 [2024-11-20 14:47:49.954467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:28:49.176 [2024-11-20 14:47:49.980315] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:28:49.176 10761.20 IOPS, 42.04 MiB/s [2024-11-20T13:48:01.134Z] 10785.83 IOPS, 42.13 MiB/s [2024-11-20T13:48:01.134Z] 10826.86 IOPS, 42.29 MiB/s [2024-11-20T13:48:01.134Z] 10829.62 IOPS, 42.30 MiB/s [2024-11-20T13:48:01.134Z] 10857.56 IOPS, 42.41 MiB/s [2024-11-20T13:48:01.134Z]
00:28:49.176 [2024-11-20 14:47:54.392897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.176 [2024-11-20 14:47:54.392933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.176 [... identical READ/ABORTED - SQ DELETION pairs for lba:37336 through lba:37400 elided ...]
00:28:49.176 [2024-11-20 14:47:54.393103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:49.176 [2024-11-20 14:47:54.393110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:49.177 [... identical WRITE/ABORTED - SQ DELETION pairs for lba:37472 through lba:37504 elided ...]
00:28:49.177 [2024-11-20 14:47:54.393195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:56 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:49.177 [2024-11-20 14:47:54.393282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37648 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 
14:47:54.393538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393617] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.177 [2024-11-20 14:47:54.393676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.177 [2024-11-20 14:47:54.393686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 
[2024-11-20 14:47:54.393960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.393990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.393998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.178 [2024-11-20 14:47:54.394262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.178 [2024-11-20 14:47:54.394268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.179 [2024-11-20 14:47:54.394276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.179 [2024-11-20 14:47:54.394284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.179 [2024-11-20 14:47:54.394292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.179 [2024-11-20 14:47:54.394299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.179 [log condensed: repeated ABORTED - SQ DELETION (00/08) completions for queued I/O on qid:1 — WRITE commands lba:38112-38344 len:8 and READ commands lba:37408-37456 len:8, each preceded by nvme_io_qpair_print_command / nvme_qpair_manual_complete_request *NOTICE* lines and nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o lines] 00:28:49.180 [2024-11-20 14:47:54.395120] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:49.180 [log condensed: four ASYNC EVENT REQUEST (0c) admin commands on qid:0 (cid:0-3, cdw10:00000000 cdw11:00000000) aborted with SQ DELETION (00/08); final completion follows] 00:28:49.180 [2024-11-20 14:47:54.395193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.180 [2024-11-20 14:47:54.395200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:28:49.180 [2024-11-20 14:47:54.395221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a52340 (9): Bad file descriptor 00:28:49.180 [2024-11-20 14:47:54.398053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:28:49.180 [2024-11-20 14:47:54.474254] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:28:49.180 10774.40 IOPS, 42.09 MiB/s [2024-11-20T13:48:01.138Z] 10779.36 IOPS, 42.11 MiB/s [2024-11-20T13:48:01.138Z] 10805.33 IOPS, 42.21 MiB/s [2024-11-20T13:48:01.138Z] 10813.08 IOPS, 42.24 MiB/s [2024-11-20T13:48:01.138Z] 10828.21 IOPS, 42.30 MiB/s 00:28:49.180 Latency(us) 00:28:49.180 [2024-11-20T13:48:01.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.180 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:49.180 Verification LBA range: start 0x0 length 0x4000 00:28:49.180 NVMe0n1 : 15.00 10835.60 42.33 539.97 0.00 11229.46 429.19 23478.98 00:28:49.180 [2024-11-20T13:48:01.138Z] =================================================================================================================== 00:28:49.180 [2024-11-20T13:48:01.138Z] Total : 10835.60 42.33 539.97 0.00 11229.46 429.19 23478.98 00:28:49.180 Received shutdown signal, test time was about 15.000000 seconds 00:28:49.180 00:28:49.180 Latency(us) 00:28:49.180 [2024-11-20T13:48:01.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.180 [2024-11-20T13:48:01.138Z] =================================================================================================================== 00:28:49.180 [2024-11-20T13:48:01.138Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1695320 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1695320 /var/tmp/bdevperf.sock 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1695320 ']' 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:49.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:28:49.180 14:48:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:49.180 [2024-11-20 14:48:01.018619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:49.180 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:49.438 [2024-11-20 14:48:01.227251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:49.438 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:49.696 NVMe0n1 00:28:49.696 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:50.260 00:28:50.260 14:48:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:50.260 00:28:50.260 14:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:50.260 14:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:50.517 14:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:50.778 14:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:54.060 14:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:54.060 14:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:54.060 14:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1696313 00:28:54.060 14:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:54.060 14:48:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1696313 00:28:54.993 { 00:28:54.993 "results": [ 00:28:54.993 { 00:28:54.993 "job": "NVMe0n1", 00:28:54.994 "core_mask": "0x1", 00:28:54.994 "workload": "verify", 00:28:54.994 "status": "finished", 00:28:54.994 "verify_range": { 00:28:54.994 "start": 0, 00:28:54.994 "length": 16384 00:28:54.994 }, 00:28:54.994 "queue_depth": 128, 00:28:54.994 "io_size": 4096, 00:28:54.994 "runtime": 1.014187, 00:28:54.994 "iops": 10945.713167295578, 00:28:54.994 "mibps": 42.75669205974835, 00:28:54.994 "io_failed": 0, 00:28:54.994 "io_timeout": 0, 00:28:54.994 "avg_latency_us": 
11648.518150264566, 00:28:54.994 "min_latency_us": 2179.784347826087, 00:28:54.994 "max_latency_us": 10143.83304347826 00:28:54.994 } 00:28:54.994 ], 00:28:54.994 "core_count": 1 00:28:54.994 } 00:28:54.994 14:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:54.994 [2024-11-20 14:48:00.625080] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:28:54.994 [2024-11-20 14:48:00.625136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695320 ] 00:28:54.994 [2024-11-20 14:48:00.700957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.994 [2024-11-20 14:48:00.739203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.994 [2024-11-20 14:48:02.581304] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:54.994 [2024-11-20 14:48:02.581350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.994 [2024-11-20 14:48:02.581361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.994 [2024-11-20 14:48:02.581370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.994 [2024-11-20 14:48:02.581376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.994 [2024-11-20 14:48:02.581384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:28:54.994 [2024-11-20 14:48:02.581390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.994 [2024-11-20 14:48:02.581397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.994 [2024-11-20 14:48:02.581403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.994 [2024-11-20 14:48:02.581410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:28:54.994 [2024-11-20 14:48:02.581433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:54.994 [2024-11-20 14:48:02.581447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa1340 (9): Bad file descriptor 00:28:54.994 [2024-11-20 14:48:02.592180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:54.994 Running I/O for 1 seconds... 
00:28:54.994 10869.00 IOPS, 42.46 MiB/s 00:28:54.994 Latency(us) 00:28:54.994 [2024-11-20T13:48:06.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.994 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:54.994 Verification LBA range: start 0x0 length 0x4000 00:28:54.994 NVMe0n1 : 1.01 10945.71 42.76 0.00 0.00 11648.52 2179.78 10143.83 00:28:54.994 [2024-11-20T13:48:06.952Z] =================================================================================================================== 00:28:54.994 [2024-11-20T13:48:06.952Z] Total : 10945.71 42.76 0.00 0.00 11648.52 2179.78 10143.83 00:28:54.994 14:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:55.251 14:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:55.251 14:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:55.508 14:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:55.508 14:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:55.766 14:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:56.023 14:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1695320 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1695320 ']' 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1695320 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.315 14:48:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1695320 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1695320' 00:28:59.315 killing process with pid 1695320 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1695320 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1695320 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:59.315 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.572 rmmod nvme_tcp 00:28:59.572 rmmod nvme_fabrics 00:28:59.572 rmmod nvme_keyring 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1692454 ']' 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1692454 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1692454 ']' 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1692454 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1692454 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1692454' 00:28:59.572 killing process with pid 1692454 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1692454 00:28:59.572 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1692454 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.831 14:48:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.367 00:29:02.367 real 0m37.127s 00:29:02.367 user 1m57.527s 00:29:02.367 sys 
0m7.962s 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:02.367 ************************************ 00:29:02.367 END TEST nvmf_failover 00:29:02.367 ************************************ 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.367 ************************************ 00:29:02.367 START TEST nvmf_host_discovery 00:29:02.367 ************************************ 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:02.367 * Looking for test storage... 
00:29:02.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:02.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.367 --rc genhtml_branch_coverage=1 00:29:02.367 --rc genhtml_function_coverage=1 00:29:02.367 --rc 
genhtml_legend=1 00:29:02.367 --rc geninfo_all_blocks=1 00:29:02.367 --rc geninfo_unexecuted_blocks=1 00:29:02.367 00:29:02.367 ' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:02.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.367 --rc genhtml_branch_coverage=1 00:29:02.367 --rc genhtml_function_coverage=1 00:29:02.367 --rc genhtml_legend=1 00:29:02.367 --rc geninfo_all_blocks=1 00:29:02.367 --rc geninfo_unexecuted_blocks=1 00:29:02.367 00:29:02.367 ' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:02.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.367 --rc genhtml_branch_coverage=1 00:29:02.367 --rc genhtml_function_coverage=1 00:29:02.367 --rc genhtml_legend=1 00:29:02.367 --rc geninfo_all_blocks=1 00:29:02.367 --rc geninfo_unexecuted_blocks=1 00:29:02.367 00:29:02.367 ' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:02.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.367 --rc genhtml_branch_coverage=1 00:29:02.367 --rc genhtml_function_coverage=1 00:29:02.367 --rc genhtml_legend=1 00:29:02.367 --rc geninfo_all_blocks=1 00:29:02.367 --rc geninfo_unexecuted_blocks=1 00:29:02.367 00:29:02.367 ' 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.367 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:02.367 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.367 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.367 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.368 14:48:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.368 14:48:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.368 14:48:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.368 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.938 
14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.938 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.939 14:48:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:08.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:08.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:08.939 Found net devices under 0000:86:00.0: cvl_0_0 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:08.939 Found net devices under 0000:86:00.1: cvl_0_1 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:08.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:08.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms
00:29:08.939
00:29:08.939 --- 10.0.0.2 ping statistics ---
00:29:08.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:08.939 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:08.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:08.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms
00:29:08.939
00:29:08.939 --- 10.0.0.1 ping statistics ---
00:29:08.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:08.939 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1701080
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1701080
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1701080 ']'
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:08.939 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:08.940 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:08.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:08.940 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:08.940 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 [2024-11-20 14:48:19.978831] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:29:08.940 [2024-11-20 14:48:19.978881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:08.940 [2024-11-20 14:48:20.060131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.940 [2024-11-20 14:48:20.104956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:08.940 [2024-11-20 14:48:20.104994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:08.940 [2024-11-20 14:48:20.105001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:08.940 [2024-11-20 14:48:20.105007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:08.940 [2024-11-20 14:48:20.105015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:08.940 [2024-11-20 14:48:20.105567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 [2024-11-20 14:48:20.242027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 [2024-11-20 14:48:20.254211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 null0
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 null1
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1701135
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1701135 /tmp/host.sock
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1701135 ']'
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:29:08.940 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 [2024-11-20 14:48:20.331002] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:29:08.940 [2024-11-20 14:48:20.331045] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701135 ]
00:29:08.940 [2024-11-20 14:48:20.404287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.940 [2024-11-20 14:48:20.447108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:08.940 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.941 [2024-11-20 14:48:20.843695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:08.941 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.200 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:09.201 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.201 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:29:09.201 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:29:09.768 [2024-11-20 14:48:21.610452] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:29:09.768 [2024-11-20 14:48:21.610470] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:29:09.768 [2024-11-20 14:48:21.610482] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:29:10.027 [2024-11-20 14:48:21.738885] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:29:10.027 [2024-11-20 14:48:21.799447] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:29:10.027 [2024-11-20 14:48:21.800212] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1145df0:1 started.
00:29:10.027 [2024-11-20 14:48:21.801612] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:29:10.027 [2024-11-20 14:48:21.801629] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:29:10.027 [2024-11-20 14:48:21.808795] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1145df0 was disconnected and freed. delete nvme_qpair.
00:29:10.285 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:10.285 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:29:10.285 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:29:10.285 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:10.285 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:29:10.285 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.286 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:10.545 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:29:10.545 [2024-11-20 14:48:22.490538] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1114620:1 started.
00:29:10.545 [2024-11-20 14:48:22.500433] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1114620 was disconnected and freed. delete nvme_qpair.
00:29:10.804 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.805 [2024-11-20 14:48:22.576395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:10.805 [2024-11-20 14:48:22.576833] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:10.805 [2024-11-20 14:48:22.576852] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:10.805 14:48:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:10.805 14:48:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.805 [2024-11-20 14:48:22.703573] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:10.805 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:29:11.064 [2024-11-20 14:48:23.008852] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:29:11.064 [2024-11-20 14:48:23.008887] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:11.064 [2024-11-20 14:48:23.008895] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:29:11.064 [2024-11-20 14:48:23.008899] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.003 [2024-11-20 14:48:23.836222] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:12.003 [2024-11-20 14:48:23.836243] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:12.003 [2024-11-20 14:48:23.844952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.003 [2024-11-20 14:48:23.844973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.003 [2024-11-20 14:48:23.844982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.003 [2024-11-20 14:48:23.844990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.003 [2024-11-20 14:48:23.844997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.003 [2024-11-20 14:48:23.845004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.003 [2024-11-20 14:48:23.845011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.003 [2024-11-20 14:48:23.845018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.003 [2024-11-20 14:48:23.845024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.003 14:48:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:12.003 [2024-11-20 14:48:23.854960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.003 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.003 [2024-11-20 14:48:23.864994] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.003 [2024-11-20 14:48:23.865010] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.003 [2024-11-20 14:48:23.865015] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.865019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.004 [2024-11-20 14:48:23.865035] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:12.004 [2024-11-20 14:48:23.865279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-11-20 14:48:23.865293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.004 [2024-11-20 14:48:23.865302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.004 [2024-11-20 14:48:23.865313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.004 [2024-11-20 14:48:23.865323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.004 [2024-11-20 14:48:23.865330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.004 [2024-11-20 14:48:23.865337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.004 [2024-11-20 14:48:23.865343] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.004 [2024-11-20 14:48:23.865348] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.004 [2024-11-20 14:48:23.865352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.004 [2024-11-20 14:48:23.875066] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.004 [2024-11-20 14:48:23.875077] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:29:12.004 [2024-11-20 14:48:23.875081] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.875085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.004 [2024-11-20 14:48:23.875098] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.875366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-11-20 14:48:23.875378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.004 [2024-11-20 14:48:23.875385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.004 [2024-11-20 14:48:23.875396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.004 [2024-11-20 14:48:23.875412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.004 [2024-11-20 14:48:23.875418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.004 [2024-11-20 14:48:23.875425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.004 [2024-11-20 14:48:23.875431] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.004 [2024-11-20 14:48:23.875435] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.004 [2024-11-20 14:48:23.875439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:12.004 [2024-11-20 14:48:23.885130] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.004 [2024-11-20 14:48:23.885146] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.004 [2024-11-20 14:48:23.885150] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.885156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.004 [2024-11-20 14:48:23.885172] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.885295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-11-20 14:48:23.885307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.004 [2024-11-20 14:48:23.885314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.004 [2024-11-20 14:48:23.885325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.004 [2024-11-20 14:48:23.885335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.004 [2024-11-20 14:48:23.885341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.004 [2024-11-20 14:48:23.885348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.004 [2024-11-20 14:48:23.885355] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:12.004 [2024-11-20 14:48:23.885360] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.004 [2024-11-20 14:48:23.885364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.004 [2024-11-20 14:48:23.895203] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:29:12.004 [2024-11-20 14:48:23.895217] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.004 [2024-11-20 14:48:23.895221] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.895225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.004 [2024-11-20 14:48:23.895238] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:12.004 [2024-11-20 14:48:23.895356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-11-20 14:48:23.895369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.004 [2024-11-20 14:48:23.895377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.004 [2024-11-20 14:48:23.895388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.004 [2024-11-20 14:48:23.895398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.004 [2024-11-20 14:48:23.895404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.004 [2024-11-20 14:48:23.895411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.004 [2024-11-20 14:48:23.895417] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:12.004 [2024-11-20 14:48:23.895422] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.004 [2024-11-20 14:48:23.895429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.004 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:12.004 [2024-11-20 14:48:23.905269] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.004 [2024-11-20 14:48:23.905284] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.004 [2024-11-20 14:48:23.905289] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.004 [2024-11-20 14:48:23.905293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.004 [2024-11-20 14:48:23.905309] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:12.004 [2024-11-20 14:48:23.905464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-11-20 14:48:23.905475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.005 [2024-11-20 14:48:23.905483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.005 [2024-11-20 14:48:23.905494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.005 [2024-11-20 14:48:23.905506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.005 [2024-11-20 14:48:23.905514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.005 [2024-11-20 14:48:23.905522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.005 [2024-11-20 14:48:23.905528] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.005 [2024-11-20 14:48:23.905533] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.005 [2024-11-20 14:48:23.905538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.005 [2024-11-20 14:48:23.915340] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.005 [2024-11-20 14:48:23.915351] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:29:12.005 [2024-11-20 14:48:23.915355] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.915363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.005 [2024-11-20 14:48:23.915376] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.915606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-11-20 14:48:23.915618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.005 [2024-11-20 14:48:23.915626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.005 [2024-11-20 14:48:23.915636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.005 [2024-11-20 14:48:23.915651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.005 [2024-11-20 14:48:23.915657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.005 [2024-11-20 14:48:23.915664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.005 [2024-11-20 14:48:23.915669] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.005 [2024-11-20 14:48:23.915674] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.005 [2024-11-20 14:48:23.915678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:12.005 [2024-11-20 14:48:23.925407] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.005 [2024-11-20 14:48:23.925419] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.005 [2024-11-20 14:48:23.925424] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.925428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.005 [2024-11-20 14:48:23.925441] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.925666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-11-20 14:48:23.925678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.005 [2024-11-20 14:48:23.925685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.005 [2024-11-20 14:48:23.925696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.005 [2024-11-20 14:48:23.925706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.005 [2024-11-20 14:48:23.925712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.005 [2024-11-20 14:48:23.925719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.005 [2024-11-20 14:48:23.925725] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:12.005 [2024-11-20 14:48:23.925729] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.005 [2024-11-20 14:48:23.925733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.005 [2024-11-20 14:48:23.935473] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.005 [2024-11-20 14:48:23.935492] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.005 [2024-11-20 14:48:23.935496] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.935500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.005 [2024-11-20 14:48:23.935513] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:12.005 [2024-11-20 14:48:23.935662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-11-20 14:48:23.935673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.005 [2024-11-20 14:48:23.935680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.005 [2024-11-20 14:48:23.935690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.005 [2024-11-20 14:48:23.935705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.005 [2024-11-20 14:48:23.935712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.005 [2024-11-20 14:48:23.935718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.005 [2024-11-20 14:48:23.935724] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.005 [2024-11-20 14:48:23.935728] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.005 [2024-11-20 14:48:23.935732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:12.005 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:12.005 [2024-11-20 14:48:23.945544] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:29:12.005 [2024-11-20 14:48:23.945556] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.005 [2024-11-20 14:48:23.945560] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.945568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.005 [2024-11-20 14:48:23.945581] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.945685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-11-20 14:48:23.945696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.005 [2024-11-20 14:48:23.945704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.005 [2024-11-20 14:48:23.945714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.005 [2024-11-20 14:48:23.945723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.005 [2024-11-20 14:48:23.945729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.005 [2024-11-20 14:48:23.945736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:12.005 [2024-11-20 14:48:23.945741] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.005 [2024-11-20 14:48:23.945745] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:29:12.005 [2024-11-20 14:48:23.945749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.005 [2024-11-20 14:48:23.955612] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:12.005 [2024-11-20 14:48:23.955624] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:12.005 [2024-11-20 14:48:23.955628] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.955632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:12.005 [2024-11-20 14:48:23.955645] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:12.005 [2024-11-20 14:48:23.955715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-11-20 14:48:23.955726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1116390 with addr=10.0.0.2, port=4420 00:29:12.006 [2024-11-20 14:48:23.955733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116390 is same with the state(6) to be set 00:29:12.006 [2024-11-20 14:48:23.955744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1116390 (9): Bad file descriptor 00:29:12.006 [2024-11-20 14:48:23.955759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:12.006 [2024-11-20 14:48:23.955766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:12.006 [2024-11-20 14:48:23.955773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:29:12.006 [2024-11-20 14:48:23.955778] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:12.006 [2024-11-20 14:48:23.955782] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:12.006 [2024-11-20 14:48:23.955786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:12.006 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.264 [2024-11-20 14:48:23.963330] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:12.264 [2024-11-20 14:48:23.963346] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:12.264 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:29:12.264 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.201 14:48:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.201 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.201 
14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.201 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:13.202 14:48:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:13.461 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.462 14:48:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:14.398 [2024-11-20 14:48:26.302451] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:14.398 [2024-11-20 14:48:26.302466] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:14.398 [2024-11-20 14:48:26.302477] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:14.657 [2024-11-20 14:48:26.428871] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:29:14.916 [2024-11-20 14:48:26.729230] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:29:14.916 [2024-11-20 14:48:26.729844] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x127df40:1 started. 00:29:14.916 [2024-11-20 14:48:26.731451] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:14.916 [2024-11-20 14:48:26.731476] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.916 [2024-11-20 14:48:26.732648] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x127df40 was disconnected and freed. delete nvme_qpair. 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:14.916 request: 00:29:14.916 { 00:29:14.916 "name": "nvme", 00:29:14.916 "trtype": "tcp", 00:29:14.916 "traddr": "10.0.0.2", 00:29:14.916 "adrfam": "ipv4", 00:29:14.916 "trsvcid": "8009", 00:29:14.916 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:14.916 "wait_for_attach": true, 00:29:14.916 "method": "bdev_nvme_start_discovery", 00:29:14.916 "req_id": 1 00:29:14.916 } 00:29:14.916 Got JSON-RPC error response 00:29:14.916 response: 00:29:14.916 { 00:29:14.916 "code": -17, 00:29:14.916 "message": "File exists" 00:29:14.916 } 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:14.916 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:14.917 request: 00:29:14.917 { 00:29:14.917 "name": "nvme_second", 00:29:14.917 "trtype": "tcp", 00:29:14.917 "traddr": "10.0.0.2", 00:29:14.917 "adrfam": "ipv4", 00:29:14.917 "trsvcid": "8009", 00:29:14.917 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:14.917 "wait_for_attach": true, 00:29:14.917 "method": "bdev_nvme_start_discovery", 00:29:14.917 "req_id": 1 00:29:14.917 } 00:29:14.917 Got JSON-RPC error response 00:29:14.917 response: 00:29:14.917 { 00:29:14.917 "code": -17, 00:29:14.917 "message": "File exists" 00:29:14.917 } 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:14.917 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:15.176 14:48:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.176 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:16.113 [2024-11-20 14:48:27.974900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.113 [2024-11-20 14:48:27.974931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x112fe40 with addr=10.0.0.2, port=8010 00:29:16.113 [2024-11-20 14:48:27.974945] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:16.113 [2024-11-20 14:48:27.974957] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:16.113 [2024-11-20 14:48:27.974963] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:17.049 [2024-11-20 14:48:28.977346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.049 [2024-11-20 14:48:28.977371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112fe40 with addr=10.0.0.2, port=8010 00:29:17.049 [2024-11-20 14:48:28.977383] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:17.049 [2024-11-20 14:48:28.977389] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:17.049 [2024-11-20 14:48:28.977395] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:18.427 [2024-11-20 14:48:29.979537] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:18.427 request: 00:29:18.427 { 00:29:18.427 "name": "nvme_second", 00:29:18.427 "trtype": "tcp", 00:29:18.427 "traddr": "10.0.0.2", 00:29:18.427 "adrfam": "ipv4", 00:29:18.427 "trsvcid": "8010", 00:29:18.427 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:18.427 "wait_for_attach": false, 00:29:18.427 "attach_timeout_ms": 3000, 00:29:18.427 "method": "bdev_nvme_start_discovery", 00:29:18.427 "req_id": 1 00:29:18.427 } 00:29:18.427 Got JSON-RPC error response 00:29:18.427 response: 00:29:18.427 { 00:29:18.427 "code": -110, 00:29:18.427 "message": "Connection timed out" 00:29:18.427 } 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.427 14:48:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:18.427 14:48:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1701135 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:29:18.427 14:48:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.427 rmmod nvme_tcp 00:29:18.427 rmmod nvme_fabrics 00:29:18.427 rmmod nvme_keyring 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1701080 ']' 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1701080 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1701080 ']' 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1701080 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701080 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701080' 
00:29:18.427 killing process with pid 1701080 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1701080 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1701080 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.427 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.428 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.428 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.428 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.428 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.428 14:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.965 00:29:20.965 real 0m18.572s 00:29:20.965 user 0m23.198s 00:29:20.965 sys 0m5.966s 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.965 14:48:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:20.965 ************************************ 00:29:20.965 END TEST nvmf_host_discovery 00:29:20.965 ************************************ 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.965 ************************************ 00:29:20.965 START TEST nvmf_host_multipath_status 00:29:20.965 ************************************ 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:29:20.965 * Looking for test storage... 
00:29:20.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:29:20.965 14:48:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.965 14:48:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:20.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.965 --rc genhtml_branch_coverage=1 00:29:20.965 --rc genhtml_function_coverage=1 00:29:20.965 --rc genhtml_legend=1 00:29:20.965 --rc geninfo_all_blocks=1 00:29:20.965 --rc geninfo_unexecuted_blocks=1 00:29:20.965 00:29:20.965 ' 00:29:20.965 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:20.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.965 --rc genhtml_branch_coverage=1 00:29:20.965 --rc genhtml_function_coverage=1 00:29:20.965 --rc genhtml_legend=1 00:29:20.966 --rc geninfo_all_blocks=1 00:29:20.966 --rc geninfo_unexecuted_blocks=1 00:29:20.966 00:29:20.966 ' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:20.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.966 --rc genhtml_branch_coverage=1 00:29:20.966 --rc genhtml_function_coverage=1 00:29:20.966 --rc genhtml_legend=1 00:29:20.966 --rc geninfo_all_blocks=1 00:29:20.966 --rc geninfo_unexecuted_blocks=1 00:29:20.966 00:29:20.966 ' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:20.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.966 --rc genhtml_branch_coverage=1 00:29:20.966 --rc genhtml_function_coverage=1 00:29:20.966 --rc genhtml_legend=1 00:29:20.966 --rc geninfo_all_blocks=1 00:29:20.966 --rc geninfo_unexecuted_blocks=1 00:29:20.966 00:29:20.966 ' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:29:20.966 
14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.966 14:48:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.966 14:48:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:29:27.536 Found 0000:86:00.0 (0x8086 - 0x159b)
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:29:27.536 Found 0000:86:00.1 (0x8086 - 0x159b)
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:29:27.536 Found net devices under 0000:86:00.0: cvl_0_0
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:29:27.536 Found net devices under 0000:86:00.1: cvl_0_1
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:27.536 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:27.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:27.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms
00:29:27.537
00:29:27.537 --- 10.0.0.2 ping statistics ---
00:29:27.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:27.537 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:27.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:27.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms
00:29:27.537
00:29:27.537 --- 10.0.0.1 ping statistics ---
00:29:27.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:27.537 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1706354
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1706354
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1706354 ']'
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:27.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:27.537 [2024-11-20 14:48:38.621019] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:29:27.537 [2024-11-20 14:48:38.621065] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:27.537 [2024-11-20 14:48:38.699267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:27.537 [2024-11-20 14:48:38.741037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. [2024-11-20 14:48:38.741075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:27.537 [2024-11-20 14:48:38.741082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:27.537 [2024-11-20 14:48:38.741089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:27.537 [2024-11-20 14:48:38.741095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:27.537 [2024-11-20 14:48:38.742323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:27.537 [2024-11-20 14:48:38.742325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1706354
00:29:27.537 14:48:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:29:27.537 [2024-11-20 14:48:39.047527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:27.537 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:29:27.537 Malloc0
00:29:27.537 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:29:27.794 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:27.794 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:28.052 [2024-11-20 14:48:39.855591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:28.052 14:48:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:28.311 [2024-11-20 14:48:40.064186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1706636
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1706636 /var/tmp/bdevperf.sock
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1706636 ']'
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:28.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:28.311 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:28.570 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:28.570 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:29:28.570 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:29:28.828 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:29:29.087 Nvme0n1
00:29:29.087 14:48:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:29:29.654 Nvme0n1
00:29:29.654 14:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:29:29.654 14:48:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:29:31.558 14:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:29:31.558 14:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:29:31.816 14:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:32.101 14:48:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:29:33.108 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:29:33.108 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:33.108 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.108 14:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:33.108 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.108 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:33.108 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.108 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:33.366 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:33.366 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:33.366 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.366 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:33.624 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.624 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:33.624 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.624 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:33.882 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.882 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:33.882 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.882 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:34.140 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:34.140 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:34.140 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:34.140 14:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:34.398 14:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:34.398 14:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:29:34.398 14:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:34.657 14:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:34.657 14:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.031 14:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.290 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:36.548 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.548 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:36.548 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.548 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:36.806 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:36.806 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:36.806 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:36.806 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:37.064 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:37.064 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:29:37.064 14:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:37.323 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:29:37.323 14:48:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.696 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:38.954 14:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:39.212 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:39.212 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:39.212 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:39.212 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:39.471 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:39.471 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:39.471 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:39.471 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:39.729 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:39.729 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:29:39.729 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:39.987 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:29:39.987 14:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:29:41.359 14:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:29:41.359 14:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:41.359 14:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.359 14:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:41.359 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.359 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:41.359 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.359 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:41.617 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:41.618 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:41.618 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.618 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:41.875 14:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:42.133 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:42.133 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:29:42.133 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:42.133 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:42.391 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:42.391 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:29:42.391 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:42.649 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:42.907 14:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:29:43.839 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:29:43.839 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:43.839 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:43.839 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:44.097 14:48:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:44.097 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:44.097 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.097 14:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.355 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:44.613 
14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:44.613 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:44.613 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.613 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:44.871 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:44.871 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:44.871 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:44.871 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:45.129 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:45.129 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:29:45.129 14:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:45.129 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:45.387 14:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:29:46.320 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:29:46.320 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:46.320 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.320 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:46.579 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:46.579 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:46.579 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.579 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:46.836 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:46.836 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:46.836 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:46.836 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:47.094 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:47.094 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:47.094 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:47.094 14:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:47.361 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:47.361 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:29:47.361 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:47.361 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:47.634 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:47.634 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:47.634 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:47.634 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:47.634 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:47.634 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:29:47.893 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:29:47.893 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:29:48.151 14:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:48.409 14:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:29:49.344 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:29:49.344 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:49.344 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:49.344 
14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:49.603 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:49.603 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:49.603 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:49.603 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:29:49.861 14:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:50.119 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:50.119 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:50.119 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:50.119 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:50.378 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:50.378 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:50.378 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:50.378 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:50.636 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:50.636 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:29:50.636 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:50.895 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:51.153 14:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:52.087 14:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:52.087 14:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:52.087 14:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.087 14:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:52.345 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:52.345 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:52.345 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:52.345 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.345 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:52.345 14:49:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:52.345 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.603 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:52.603 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:52.603 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:52.603 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.603 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:52.862 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:52.862 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:52.862 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:52.862 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:53.120 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:53.120 
14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:53.120 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:53.120 14:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:53.378 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:53.378 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:53.378 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:53.635 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:53.636 14:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.010 14:49:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:55.010 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.269 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.269 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:55.269 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.269 14:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:55.269 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.269 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:55.269 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.269 14:49:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:55.526 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.526 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:55.526 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.526 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:55.784 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:55.784 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:55.784 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:55.784 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:56.042 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:56.042 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:56.042 14:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:56.301 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:56.559 14:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:57.491 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:57.491 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:57.491 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.491 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:57.749 14:49:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:57.749 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:58.008 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.008 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:58.008 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.008 14:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:58.266 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.266 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:58.266 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.266 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:58.524 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:58.524 
14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:58.524 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:58.524 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1706636 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1706636 ']' 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1706636 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706636 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706636' 00:29:58.782 killing process with pid 1706636 00:29:58.782 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1706636 00:29:58.782 
14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1706636 00:29:58.782 { 00:29:58.782 "results": [ 00:29:58.782 { 00:29:58.782 "job": "Nvme0n1", 00:29:58.782 "core_mask": "0x4", 00:29:58.782 "workload": "verify", 00:29:58.782 "status": "terminated", 00:29:58.782 "verify_range": { 00:29:58.782 "start": 0, 00:29:58.782 "length": 16384 00:29:58.782 }, 00:29:58.782 "queue_depth": 128, 00:29:58.782 "io_size": 4096, 00:29:58.782 "runtime": 29.053723, 00:29:58.782 "iops": 10344.973688914153, 00:29:58.782 "mibps": 40.41005347232091, 00:29:58.782 "io_failed": 0, 00:29:58.782 "io_timeout": 0, 00:29:58.782 "avg_latency_us": 12353.37887630047, 00:29:58.782 "min_latency_us": 302.74782608695654, 00:29:58.782 "max_latency_us": 3019898.88 00:29:58.782 } 00:29:58.782 ], 00:29:58.782 "core_count": 1 00:29:58.782 } 00:29:59.119 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1706636 00:29:59.119 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:59.119 [2024-11-20 14:48:40.130908] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:29:59.119 [2024-11-20 14:48:40.130976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706636 ] 00:29:59.119 [2024-11-20 14:48:40.204741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.119 [2024-11-20 14:48:40.245902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.119 Running I/O for 90 seconds... 
00:29:59.119 11092.00 IOPS, 43.33 MiB/s [2024-11-20T13:49:11.077Z] 11163.00 IOPS, 43.61 MiB/s [2024-11-20T13:49:11.077Z] 11128.33 IOPS, 43.47 MiB/s [2024-11-20T13:49:11.077Z] 11156.00 IOPS, 43.58 MiB/s [2024-11-20T13:49:11.077Z] 11193.80 IOPS, 43.73 MiB/s [2024-11-20T13:49:11.077Z] 11151.00 IOPS, 43.56 MiB/s [2024-11-20T13:49:11.077Z] 11145.14 IOPS, 43.54 MiB/s [2024-11-20T13:49:11.077Z] 11171.12 IOPS, 43.64 MiB/s [2024-11-20T13:49:11.077Z] 11177.78 IOPS, 43.66 MiB/s [2024-11-20T13:49:11.077Z] 11168.10 IOPS, 43.63 MiB/s [2024-11-20T13:49:11.077Z] 11167.27 IOPS, 43.62 MiB/s [2024-11-20T13:49:11.077Z] 11162.92 IOPS, 43.61 MiB/s [2024-11-20T13:49:11.077Z] [2024-11-20 14:48:54.439357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.439987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.439999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.440006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.440025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.119 [2024-11-20 14:48:54.440045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.119 [2024-11-20 14:48:54.440064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.119 [2024-11-20 14:48:54.440086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.119 [2024-11-20 14:48:54.440106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.119 [2024-11-20 14:48:54.440125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:59.119 [2024-11-20 14:48:54.440137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.120 [2024-11-20 14:48:54.440145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.120 [2024-11-20 14:48:54.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.120 [2024-11-20 14:48:54.440183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.120 [2024-11-20 14:48:54.440201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.120 [2024-11-20 14:48:54.440730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:59.120 [2024-11-20 14:48:54.440984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.120 [2024-11-20 14:48:54.440993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[condensed: further identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs from 14:48:54.441008 through 14:48:54.442479 — WRITE lba:107864-108064 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:107120-107360 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), len:8 each, all sqid:1 nsid:1; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:000a-0042 p:0 m:0 dnr:0]
00:29:59.121 11111.77 IOPS, 43.41 MiB/s [2024-11-20T13:49:11.079Z] 10318.07 IOPS, 40.30 MiB/s [2024-11-20T13:49:11.079Z] 9630.20 IOPS, 37.62 MiB/s [2024-11-20T13:49:11.079Z] 9077.62 IOPS, 35.46 MiB/s [2024-11-20T13:49:11.079Z] 9201.18 IOPS, 35.94 MiB/s [2024-11-20T13:49:11.079Z] 9305.56 IOPS, 36.35 MiB/s [2024-11-20T13:49:11.079Z] 9445.84 IOPS, 36.90 MiB/s [2024-11-20T13:49:11.079Z] 9621.05 IOPS, 37.58 MiB/s [2024-11-20T13:49:11.079Z] 9784.86 IOPS, 38.22 MiB/s [2024-11-20T13:49:11.079Z] 9873.18 IOPS, 38.57 MiB/s [2024-11-20T13:49:11.079Z] 9925.91 IOPS, 38.77 MiB/s [2024-11-20T13:49:11.079Z] 9978.79 IOPS, 38.98 MiB/s [2024-11-20T13:49:11.079Z] 10097.84 IOPS, 39.44 MiB/s [2024-11-20T13:49:11.079Z] 10207.46 IOPS, 39.87 MiB/s [2024-11-20T13:49:11.079Z]
[2024-11-20 14:49:08.239483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.121 [2024-11-20 14:49:08.239525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[condensed: further identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs from 14:49:08.239560 through 14:49:08.241505 — WRITE lba:97400-98224 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:97336-97368 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), len:8 each, all sqid:1 nsid:1; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0009-003e p:0 m:0 dnr:0]
00:29:59.122 [2024-11-20 14:49:08.241517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.122 [2024-11-20 14:49:08.241524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.123 [2024-11-20 14:49:08.241543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.123 [2024-11-20 14:49:08.241562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.123 [2024-11-20 14:49:08.241581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.123 [2024-11-20 14:49:08.241600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.123 [2024-11-20 14:49:08.241619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.123 [2024-11-20 14:49:08.241638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:59.123 [2024-11-20 14:49:08.241657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.123 [2024-11-20 14:49:08.241677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.123 [2024-11-20 14:49:08.241697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.123 [2024-11-20 14:49:08.241717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.123 [2024-11-20 14:49:08.241736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:59.123 [2024-11-20 14:49:08.241749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.123 [2024-11-20 14:49:08.241756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:59.123 10289.52 IOPS, 40.19 MiB/s [2024-11-20T13:49:11.081Z] 10319.71 IOPS, 40.31 MiB/s [2024-11-20T13:49:11.081Z] 10346.38 IOPS, 40.42 MiB/s [2024-11-20T13:49:11.081Z] Received shutdown signal, test time was about 29.054391 seconds 00:29:59.123 00:29:59.123 Latency(us) 00:29:59.123 [2024-11-20T13:49:11.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.123 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:59.123 Verification LBA range: start 0x0 length 0x4000 00:29:59.123 Nvme0n1 : 29.05 10344.97 40.41 0.00 0.00 12353.38 302.75 3019898.88 00:29:59.123 [2024-11-20T13:49:11.081Z] =================================================================================================================== 00:29:59.123 [2024-11-20T13:49:11.081Z] Total : 10344.97 40.41 0.00 0.00 12353.38 302.75 3019898.88 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.123 14:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.123 rmmod nvme_tcp 00:29:59.123 rmmod nvme_fabrics 00:29:59.123 rmmod nvme_keyring 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1706354 ']' 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1706354 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1706354 ']' 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1706354 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:59.123 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.123 
14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706354 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706354' 00:29:59.381 killing process with pid 1706354 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1706354 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1706354 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.381 
14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.381 14:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.917 00:30:01.917 real 0m40.911s 00:30:01.917 user 1m51.086s 00:30:01.917 sys 0m11.684s 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:01.917 ************************************ 00:30:01.917 END TEST nvmf_host_multipath_status 00:30:01.917 ************************************ 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.917 ************************************ 00:30:01.917 START TEST nvmf_discovery_remove_ifc 00:30:01.917 ************************************ 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:01.917 * Looking for test storage... 
00:30:01.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:01.917 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:30:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.918 --rc genhtml_branch_coverage=1 00:30:01.918 --rc genhtml_function_coverage=1 00:30:01.918 --rc genhtml_legend=1 00:30:01.918 --rc geninfo_all_blocks=1 00:30:01.918 --rc geninfo_unexecuted_blocks=1 00:30:01.918 00:30:01.918 ' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.918 --rc genhtml_branch_coverage=1 00:30:01.918 --rc genhtml_function_coverage=1 00:30:01.918 --rc genhtml_legend=1 00:30:01.918 --rc geninfo_all_blocks=1 00:30:01.918 --rc geninfo_unexecuted_blocks=1 00:30:01.918 00:30:01.918 ' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.918 --rc genhtml_branch_coverage=1 00:30:01.918 --rc genhtml_function_coverage=1 00:30:01.918 --rc genhtml_legend=1 00:30:01.918 --rc geninfo_all_blocks=1 00:30:01.918 --rc geninfo_unexecuted_blocks=1 00:30:01.918 00:30:01.918 ' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.918 --rc genhtml_branch_coverage=1 00:30:01.918 --rc genhtml_function_coverage=1 00:30:01.918 --rc genhtml_legend=1 00:30:01.918 --rc geninfo_all_blocks=1 00:30:01.918 --rc geninfo_unexecuted_blocks=1 00:30:01.918 00:30:01.918 ' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:01.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:01.918 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.919 
14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.919 14:49:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.491 14:49:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.491 14:49:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:08.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.491 14:49:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:08.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:08.491 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:08.492 Found net devices under 0000:86:00.0: cvl_0_0 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:08.492 Found net devices under 0000:86:00.1: cvl_0_1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
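The trace above shows `gather_supported_nvmf_pci_devs` bucketing each NIC by PCI vendor:device ID into the e810, x722, and mlx families before matching net devices. A minimal Python sketch of that classification, using the IDs visible in this log (the function name and structure are illustrative, not the real `nvmf/common.sh` logic):

```python
# Hypothetical re-implementation of the device-family matching seen in the
# gather_supported_nvmf_pci_devs trace above (a sketch, not the actual script).
E810 = {"0x1592", "0x159b"}           # Intel E810 variants (common.sh@325-326)
X722 = {"0x37d2"}                     # Intel X722 (common.sh@328)
MLX = {"0xa2dc", "0x1021", "0xa2d6", "0x101d", "0x101b",
       "0x1017", "0x1019", "0x1015", "0x1013"}  # Mellanox IDs (common.sh@330-344)

def classify(vendor: str, device: str) -> str:
    """Return the NIC family for a PCI vendor:device pair, or 'unknown'."""
    if vendor == "0x8086" and device in E810:
        return "e810"
    if vendor == "0x8086" and device in X722:
        return "x722"
    if vendor == "0x15b3" and device in MLX:
        return "mlx"
    return "unknown"

# The two ports found in this run, 0000:86:00.0/1 (0x8086 - 0x159b):
print(classify("0x8086", "0x159b"))  # -> e810
```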
-- # [[ tcp == tcp ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:30:08.492 00:30:08.492 --- 10.0.0.2 ping statistics --- 00:30:08.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.492 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
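The two `ping -c 1` checks above confirm connectivity across the namespace boundary in both directions before the target is started. If one wanted to validate the latency summary programmatically rather than eyeball it, a small parser for the rtt line could look like this (a sketch; `parse_rtt` is an illustrative helper, not part of the test scripts):

```python
import re

def parse_rtt(ping_output: str) -> dict:
    """Extract min/avg/max/mdev (in ms) from the summary line of `ping` output."""
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms",
        ping_output,
    )
    if not m:
        raise ValueError("no rtt summary line found")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

# Summary line from the 10.0.0.2 ping in this log:
sample = "rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms"
print(parse_rtt(sample))  # {'min': 0.445, 'avg': 0.445, 'max': 0.445, 'mdev': 0.0}
```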
00:30:08.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:30:08.492 00:30:08.492 --- 10.0.0.1 ping statistics --- 00:30:08.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.492 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1715072 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1715072 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1715072 ']' 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.492 [2024-11-20 14:49:19.580688] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:30:08.492 [2024-11-20 14:49:19.580736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.492 [2024-11-20 14:49:19.661134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.492 [2024-11-20 14:49:19.702286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.492 [2024-11-20 14:49:19.702320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:08.492 [2024-11-20 14:49:19.702327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.492 [2024-11-20 14:49:19.702333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.492 [2024-11-20 14:49:19.702338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.492 [2024-11-20 14:49:19.702897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.492 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.493 [2024-11-20 14:49:19.850990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.493 [2024-11-20 14:49:19.859161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:08.493 null0 00:30:08.493 [2024-11-20 14:49:19.891134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1715260 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1715260 /tmp/host.sock 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1715260 ']' 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:08.493 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.493 14:49:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.493 [2024-11-20 14:49:19.958215] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:30:08.493 [2024-11-20 14:49:19.958258] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715260 ] 00:30:08.493 [2024-11-20 14:49:20.035510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.493 [2024-11-20 14:49:20.083440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.493 14:49:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.493 14:49:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:09.428 [2024-11-20 14:49:21.207123] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:09.428 [2024-11-20 14:49:21.207143] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:09.428 [2024-11-20 14:49:21.207159] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:09.428 [2024-11-20 14:49:21.293416] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:09.428 [2024-11-20 14:49:21.348021] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:09.428 [2024-11-20 14:49:21.348757] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1e6ca10:1 started. 
00:30:09.428 [2024-11-20 14:49:21.350111] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:09.428 [2024-11-20 14:49:21.350153] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:09.428 [2024-11-20 14:49:21.350171] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:09.428 [2024-11-20 14:49:21.350183] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:09.428 [2024-11-20 14:49:21.350201] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:09.428 [2024-11-20 14:49:21.356173] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1e6ca10 was disconnected and freed. delete nvme_qpair. 
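The `wait_for_bdev`/`get_bdev_list` calls that follow poll `bdev_get_bdevs` over the RPC socket, piping the names through `jq -r '.[].name' | sort | xargs` and sleeping one second until the list matches the expected value. A Python sketch of that polling pattern, with a stub standing in for the RPC call (the timeout parameter is an addition for illustration; the shell helper loops without one):

```python
import time

def get_bdev_list(rpc_bdevs):
    """Mimic get_bdev_list: bdev names, sorted, space-joined (jq | sort | xargs)."""
    return " ".join(sorted(b["name"] for b in rpc_bdevs))

def wait_for_bdev(expected, fetch, timeout=10.0, interval=1.0):
    """Poll until the bdev list equals `expected` (sketch of the test helper)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_bdev_list(fetch()) == expected:
            return True
        time.sleep(interval)
    return False

# Stub standing in for `rpc_cmd -s /tmp/host.sock bdev_get_bdevs`:
print(wait_for_bdev("nvme0n1", lambda: [{"name": "nvme0n1"}], interval=0.01))
```

In the log, the first wait is for `nvme0n1` to appear after discovery attaches the controller; after the interface is deleted and downed, the second wait is for the list to become empty.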
00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:09.428 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:09.686 14:49:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:10.621 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.880 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:10.880 14:49:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:11.817 14:49:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:12.754 14:49:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:14.132 14:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.068 
[2024-11-20 14:49:26.791746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:15.068 [2024-11-20 14:49:26.791787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.068 [2024-11-20 14:49:26.791797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.068 [2024-11-20 14:49:26.791811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.068 [2024-11-20 14:49:26.791818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.068 [2024-11-20 14:49:26.791824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.068 [2024-11-20 14:49:26.791831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.068 [2024-11-20 14:49:26.791837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.068 [2024-11-20 14:49:26.791844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.068 [2024-11-20 14:49:26.791851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:15.068 [2024-11-20 14:49:26.791857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:15.068 [2024-11-20 14:49:26.791864] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e49220 is same with the state(6) to be set 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:15.068 14:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:15.068 [2024-11-20 14:49:26.801768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e49220 (9): Bad file descriptor 00:30:15.068 [2024-11-20 14:49:26.811802] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:15.068 [2024-11-20 14:49:26.811813] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:15.068 [2024-11-20 14:49:26.811817] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:15.068 [2024-11-20 14:49:26.811822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:15.068 [2024-11-20 14:49:26.811842] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:16.004 [2024-11-20 14:49:27.856063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:16.004 [2024-11-20 14:49:27.856141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e49220 with addr=10.0.0.2, port=4420 00:30:16.004 [2024-11-20 14:49:27.856173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e49220 is same with the state(6) to be set 00:30:16.004 [2024-11-20 14:49:27.856223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e49220 (9): Bad file descriptor 00:30:16.004 [2024-11-20 14:49:27.857165] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:30:16.004 [2024-11-20 14:49:27.857228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:16.004 [2024-11-20 14:49:27.857260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:16.004 [2024-11-20 14:49:27.857283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:16.004 [2024-11-20 14:49:27.857303] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:16.004 [2024-11-20 14:49:27.857319] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:16.004 [2024-11-20 14:49:27.857332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:16.004 [2024-11-20 14:49:27.857355] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:16.004 [2024-11-20 14:49:27.857368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:16.004 14:49:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:16.942 [2024-11-20 14:49:28.859889] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:16.942 [2024-11-20 14:49:28.859909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:30:16.942 [2024-11-20 14:49:28.859919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:16.942 [2024-11-20 14:49:28.859926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:16.942 [2024-11-20 14:49:28.859933] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:16.942 [2024-11-20 14:49:28.859939] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:16.942 [2024-11-20 14:49:28.859963] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:16.942 [2024-11-20 14:49:28.859967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:16.942 [2024-11-20 14:49:28.859990] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:16.942 [2024-11-20 14:49:28.860009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.942 [2024-11-20 14:49:28.860018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.942 [2024-11-20 14:49:28.860027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.942 [2024-11-20 14:49:28.860034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.942 [2024-11-20 14:49:28.860041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:16.942 [2024-11-20 14:49:28.860048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.942 [2024-11-20 14:49:28.860055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.942 [2024-11-20 14:49:28.860061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.942 [2024-11-20 14:49:28.860069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:16.942 [2024-11-20 14:49:28.860080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:16.942 [2024-11-20 14:49:28.860087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:30:16.942 [2024-11-20 14:49:28.860498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e38900 (9): Bad file descriptor 00:30:16.942 [2024-11-20 14:49:28.861509] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:16.942 [2024-11-20 14:49:28.861520] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:16.942 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.204 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:17.204 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.204 14:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:17.204 14:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:18.275 14:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:19.213 [2024-11-20 14:49:30.912445] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:19.213 [2024-11-20 14:49:30.912462] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:19.213 [2024-11-20 14:49:30.912474] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:19.213 [2024-11-20 14:49:31.039874] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:19.213 [2024-11-20 14:49:31.101410] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:19.213 [2024-11-20 14:49:31.101926] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1e447a0:1 started. 
00:30:19.213 [2024-11-20 14:49:31.102998] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:19.213 [2024-11-20 14:49:31.103031] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:19.213 [2024-11-20 14:49:31.103048] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:19.213 [2024-11-20 14:49:31.103061] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:19.213 [2024-11-20 14:49:31.103069] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:19.213 14:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:19.213 [2024-11-20 14:49:31.151471] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1e447a0 was disconnected and freed. delete nvme_qpair. 
00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:20.591 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1715260 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1715260 ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1715260 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715260 
00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715260' 00:30:20.592 killing process with pid 1715260 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1715260 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1715260 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.592 rmmod nvme_tcp 00:30:20.592 rmmod nvme_fabrics 00:30:20.592 rmmod nvme_keyring 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1715072 ']' 00:30:20.592 
14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1715072 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1715072 ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1715072 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715072 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715072' 00:30:20.592 killing process with pid 1715072 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1715072 00:30:20.592 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1715072 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:30:20.851 14:49:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.851 14:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.387 00:30:23.387 real 0m21.370s 00:30:23.387 user 0m26.507s 00:30:23.387 sys 0m5.862s 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:23.387 ************************************ 00:30:23.387 END TEST nvmf_discovery_remove_ifc 00:30:23.387 ************************************ 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.387 ************************************ 
00:30:23.387 START TEST nvmf_identify_kernel_target 00:30:23.387 ************************************ 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:30:23.387 * Looking for test storage... 00:30:23.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.387 14:49:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:23.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.387 --rc genhtml_branch_coverage=1 00:30:23.387 --rc genhtml_function_coverage=1 00:30:23.387 --rc genhtml_legend=1 00:30:23.387 --rc geninfo_all_blocks=1 00:30:23.387 --rc geninfo_unexecuted_blocks=1 00:30:23.387 00:30:23.387 ' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:23.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.387 --rc genhtml_branch_coverage=1 00:30:23.387 --rc genhtml_function_coverage=1 00:30:23.387 --rc genhtml_legend=1 00:30:23.387 --rc geninfo_all_blocks=1 00:30:23.387 --rc geninfo_unexecuted_blocks=1 00:30:23.387 00:30:23.387 ' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:23.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.387 --rc genhtml_branch_coverage=1 00:30:23.387 --rc genhtml_function_coverage=1 00:30:23.387 --rc genhtml_legend=1 00:30:23.387 --rc geninfo_all_blocks=1 00:30:23.387 --rc geninfo_unexecuted_blocks=1 00:30:23.387 00:30:23.387 ' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:23.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.387 --rc genhtml_branch_coverage=1 00:30:23.387 --rc genhtml_function_coverage=1 00:30:23.387 --rc genhtml_legend=1 00:30:23.387 --rc geninfo_all_blocks=1 
00:30:23.387 --rc geninfo_unexecuted_blocks=1 00:30:23.387 00:30:23.387 ' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.387 14:49:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.387 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
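The trace above records a real (non-fatal) bash error from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an unset flag reaches a numeric test as an empty string. A minimal sketch of the usual guard; `SOME_TEST_FLAG` is a stand-in name, since the actual variable behind that test is not visible in the log:

```shell
# Defaulting an unset/empty flag to 0 before a numeric comparison avoids
# "[: : integer expression expected". SOME_TEST_FLAG is a hypothetical name.
check_flag() {
  local flag="${SOME_TEST_FLAG:-0}"   # empty/unset collapses to 0
  if [ "$flag" -eq 1 ]; then
    echo enabled
  else
    echo disabled
  fi
}
```

With `SOME_TEST_FLAG` unset, `check_flag` prints `disabled` instead of emitting the error seen in the trace.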
00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.388 14:49:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.661 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.661 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.661 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.920 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.921 14:49:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:28.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.921 14:49:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:28.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.921 14:49:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:28.921 Found net devices under 0000:86:00.0: cvl_0_0 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:28.921 Found net devices under 0000:86:00.1: cvl_0_1 
00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:28.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:30:28.921 00:30:28.921 --- 10.0.0.2 ping statistics --- 00:30:28.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.921 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:30:28.921 00:30:28.921 --- 10.0.0.1 ping statistics --- 00:30:28.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.921 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.921 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:30:29.181 
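The nvmf_tcp_init steps traced above (namespace creation, moving the target NIC, addressing both ends, bringing links up, then ping verification) can be sketched as one function. Interface and address names are taken from this log; `DRY_RUN=1` prints each command instead of executing it, since the real sequence needs root and the physical cvl NICs:

```shell
# Sketch of the netns-based TCP test topology from the trace above.
# run() echoes the command under DRY_RUN=1 so the sequence can be inspected
# without privileges or hardware.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_ns() {
  run ip netns add cvl_0_0_ns_spdk                                  # target namespace
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target NIC in
  run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
}
```

After this, the trace verifies reachability in both directions with `ping -c 1` before starting the NVMe-oF work.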
14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:29.181 14:49:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:31.717 Waiting for block devices as requested 00:30:31.977 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:31.977 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:31.977 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:32.236 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:32.236 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:32.236 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:32.495 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:32.495 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:32.495 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:32.495 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:32.753 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:32.753 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:32.753 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:33.013 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:33.013 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:30:33.013 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:33.272 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:33.272 No valid GPT data, bailing 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:33.272 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:33.533 00:30:33.533 Discovery Log Number of Records 2, Generation counter 2 00:30:33.533 =====Discovery Log Entry 0====== 00:30:33.533 trtype: tcp 00:30:33.533 adrfam: ipv4 00:30:33.533 subtype: current discovery subsystem 
00:30:33.533 treq: not specified, sq flow control disable supported 00:30:33.533 portid: 1 00:30:33.533 trsvcid: 4420 00:30:33.533 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:33.533 traddr: 10.0.0.1 00:30:33.533 eflags: none 00:30:33.533 sectype: none 00:30:33.533 =====Discovery Log Entry 1====== 00:30:33.533 trtype: tcp 00:30:33.533 adrfam: ipv4 00:30:33.533 subtype: nvme subsystem 00:30:33.533 treq: not specified, sq flow control disable supported 00:30:33.533 portid: 1 00:30:33.533 trsvcid: 4420 00:30:33.533 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:33.533 traddr: 10.0.0.1 00:30:33.533 eflags: none 00:30:33.533 sectype: none 00:30:33.533 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:30:33.533 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:30:33.533 ===================================================== 00:30:33.533 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:33.533 ===================================================== 00:30:33.533 Controller Capabilities/Features 00:30:33.533 ================================ 00:30:33.533 Vendor ID: 0000 00:30:33.533 Subsystem Vendor ID: 0000 00:30:33.533 Serial Number: 869362fc4f5c8db9fde4 00:30:33.533 Model Number: Linux 00:30:33.533 Firmware Version: 6.8.9-20 00:30:33.533 Recommended Arb Burst: 0 00:30:33.533 IEEE OUI Identifier: 00 00 00 00:30:33.533 Multi-path I/O 00:30:33.533 May have multiple subsystem ports: No 00:30:33.533 May have multiple controllers: No 00:30:33.533 Associated with SR-IOV VF: No 00:30:33.533 Max Data Transfer Size: Unlimited 00:30:33.533 Max Number of Namespaces: 0 00:30:33.533 Max Number of I/O Queues: 1024 00:30:33.533 NVMe Specification Version (VS): 1.3 00:30:33.533 NVMe Specification Version (Identify): 1.3 00:30:33.533 Maximum Queue Entries: 1024 
00:30:33.533 Contiguous Queues Required: No 00:30:33.533 Arbitration Mechanisms Supported 00:30:33.533 Weighted Round Robin: Not Supported 00:30:33.533 Vendor Specific: Not Supported 00:30:33.533 Reset Timeout: 7500 ms 00:30:33.533 Doorbell Stride: 4 bytes 00:30:33.533 NVM Subsystem Reset: Not Supported 00:30:33.533 Command Sets Supported 00:30:33.533 NVM Command Set: Supported 00:30:33.533 Boot Partition: Not Supported 00:30:33.533 Memory Page Size Minimum: 4096 bytes 00:30:33.533 Memory Page Size Maximum: 4096 bytes 00:30:33.533 Persistent Memory Region: Not Supported 00:30:33.533 Optional Asynchronous Events Supported 00:30:33.533 Namespace Attribute Notices: Not Supported 00:30:33.533 Firmware Activation Notices: Not Supported 00:30:33.533 ANA Change Notices: Not Supported 00:30:33.533 PLE Aggregate Log Change Notices: Not Supported 00:30:33.533 LBA Status Info Alert Notices: Not Supported 00:30:33.533 EGE Aggregate Log Change Notices: Not Supported 00:30:33.533 Normal NVM Subsystem Shutdown event: Not Supported 00:30:33.533 Zone Descriptor Change Notices: Not Supported 00:30:33.533 Discovery Log Change Notices: Supported 00:30:33.533 Controller Attributes 00:30:33.533 128-bit Host Identifier: Not Supported 00:30:33.533 Non-Operational Permissive Mode: Not Supported 00:30:33.533 NVM Sets: Not Supported 00:30:33.533 Read Recovery Levels: Not Supported 00:30:33.533 Endurance Groups: Not Supported 00:30:33.534 Predictable Latency Mode: Not Supported 00:30:33.534 Traffic Based Keep ALive: Not Supported 00:30:33.534 Namespace Granularity: Not Supported 00:30:33.534 SQ Associations: Not Supported 00:30:33.534 UUID List: Not Supported 00:30:33.534 Multi-Domain Subsystem: Not Supported 00:30:33.534 Fixed Capacity Management: Not Supported 00:30:33.534 Variable Capacity Management: Not Supported 00:30:33.534 Delete Endurance Group: Not Supported 00:30:33.534 Delete NVM Set: Not Supported 00:30:33.534 Extended LBA Formats Supported: Not Supported 00:30:33.534 Flexible 
Data Placement Supported: Not Supported 00:30:33.534 00:30:33.534 Controller Memory Buffer Support 00:30:33.534 ================================ 00:30:33.534 Supported: No 00:30:33.534 00:30:33.534 Persistent Memory Region Support 00:30:33.534 ================================ 00:30:33.534 Supported: No 00:30:33.534 00:30:33.534 Admin Command Set Attributes 00:30:33.534 ============================ 00:30:33.534 Security Send/Receive: Not Supported 00:30:33.534 Format NVM: Not Supported 00:30:33.534 Firmware Activate/Download: Not Supported 00:30:33.534 Namespace Management: Not Supported 00:30:33.534 Device Self-Test: Not Supported 00:30:33.534 Directives: Not Supported 00:30:33.534 NVMe-MI: Not Supported 00:30:33.534 Virtualization Management: Not Supported 00:30:33.534 Doorbell Buffer Config: Not Supported 00:30:33.534 Get LBA Status Capability: Not Supported 00:30:33.534 Command & Feature Lockdown Capability: Not Supported 00:30:33.534 Abort Command Limit: 1 00:30:33.534 Async Event Request Limit: 1 00:30:33.534 Number of Firmware Slots: N/A 00:30:33.534 Firmware Slot 1 Read-Only: N/A 00:30:33.534 Firmware Activation Without Reset: N/A 00:30:33.534 Multiple Update Detection Support: N/A 00:30:33.534 Firmware Update Granularity: No Information Provided 00:30:33.534 Per-Namespace SMART Log: No 00:30:33.534 Asymmetric Namespace Access Log Page: Not Supported 00:30:33.534 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:33.534 Command Effects Log Page: Not Supported 00:30:33.534 Get Log Page Extended Data: Supported 00:30:33.534 Telemetry Log Pages: Not Supported 00:30:33.534 Persistent Event Log Pages: Not Supported 00:30:33.534 Supported Log Pages Log Page: May Support 00:30:33.534 Commands Supported & Effects Log Page: Not Supported 00:30:33.534 Feature Identifiers & Effects Log Page:May Support 00:30:33.534 NVMe-MI Commands & Effects Log Page: May Support 00:30:33.534 Data Area 4 for Telemetry Log: Not Supported 00:30:33.534 Error Log Page Entries 
Supported: 1 00:30:33.534 Keep Alive: Not Supported 00:30:33.534 00:30:33.534 NVM Command Set Attributes 00:30:33.534 ========================== 00:30:33.534 Submission Queue Entry Size 00:30:33.534 Max: 1 00:30:33.534 Min: 1 00:30:33.534 Completion Queue Entry Size 00:30:33.534 Max: 1 00:30:33.534 Min: 1 00:30:33.534 Number of Namespaces: 0 00:30:33.534 Compare Command: Not Supported 00:30:33.534 Write Uncorrectable Command: Not Supported 00:30:33.534 Dataset Management Command: Not Supported 00:30:33.534 Write Zeroes Command: Not Supported 00:30:33.534 Set Features Save Field: Not Supported 00:30:33.534 Reservations: Not Supported 00:30:33.534 Timestamp: Not Supported 00:30:33.534 Copy: Not Supported 00:30:33.534 Volatile Write Cache: Not Present 00:30:33.534 Atomic Write Unit (Normal): 1 00:30:33.534 Atomic Write Unit (PFail): 1 00:30:33.534 Atomic Compare & Write Unit: 1 00:30:33.534 Fused Compare & Write: Not Supported 00:30:33.534 Scatter-Gather List 00:30:33.534 SGL Command Set: Supported 00:30:33.534 SGL Keyed: Not Supported 00:30:33.534 SGL Bit Bucket Descriptor: Not Supported 00:30:33.534 SGL Metadata Pointer: Not Supported 00:30:33.534 Oversized SGL: Not Supported 00:30:33.534 SGL Metadata Address: Not Supported 00:30:33.534 SGL Offset: Supported 00:30:33.534 Transport SGL Data Block: Not Supported 00:30:33.534 Replay Protected Memory Block: Not Supported 00:30:33.534 00:30:33.534 Firmware Slot Information 00:30:33.534 ========================= 00:30:33.534 Active slot: 0 00:30:33.534 00:30:33.534 00:30:33.534 Error Log 00:30:33.534 ========= 00:30:33.534 00:30:33.534 Active Namespaces 00:30:33.534 ================= 00:30:33.534 Discovery Log Page 00:30:33.534 ================== 00:30:33.534 Generation Counter: 2 00:30:33.534 Number of Records: 2 00:30:33.534 Record Format: 0 00:30:33.534 00:30:33.534 Discovery Log Entry 0 00:30:33.534 ---------------------- 00:30:33.534 Transport Type: 3 (TCP) 00:30:33.534 Address Family: 1 (IPv4) 00:30:33.534 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:30:33.534 Entry Flags: 00:30:33.534 Duplicate Returned Information: 0 00:30:33.534 Explicit Persistent Connection Support for Discovery: 0 00:30:33.534 Transport Requirements: 00:30:33.534 Secure Channel: Not Specified 00:30:33.534 Port ID: 1 (0x0001) 00:30:33.534 Controller ID: 65535 (0xffff) 00:30:33.534 Admin Max SQ Size: 32 00:30:33.534 Transport Service Identifier: 4420 00:30:33.534 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:33.534 Transport Address: 10.0.0.1 00:30:33.534 Discovery Log Entry 1 00:30:33.534 ---------------------- 00:30:33.534 Transport Type: 3 (TCP) 00:30:33.534 Address Family: 1 (IPv4) 00:30:33.534 Subsystem Type: 2 (NVM Subsystem) 00:30:33.534 Entry Flags: 00:30:33.534 Duplicate Returned Information: 0 00:30:33.534 Explicit Persistent Connection Support for Discovery: 0 00:30:33.534 Transport Requirements: 00:30:33.534 Secure Channel: Not Specified 00:30:33.534 Port ID: 1 (0x0001) 00:30:33.534 Controller ID: 65535 (0xffff) 00:30:33.534 Admin Max SQ Size: 32 00:30:33.534 Transport Service Identifier: 4420 00:30:33.534 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:30:33.534 Transport Address: 10.0.0.1 00:30:33.534 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:33.534 get_feature(0x01) failed 00:30:33.534 get_feature(0x02) failed 00:30:33.534 get_feature(0x04) failed 00:30:33.534 ===================================================== 00:30:33.534 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:33.534 ===================================================== 00:30:33.534 Controller Capabilities/Features 00:30:33.534 ================================ 00:30:33.534 Vendor ID: 0000 00:30:33.534 Subsystem Vendor ID: 
0000 00:30:33.534 Serial Number: 4d5da1e077dd5dee5838 00:30:33.534 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:30:33.534 Firmware Version: 6.8.9-20 00:30:33.534 Recommended Arb Burst: 6 00:30:33.534 IEEE OUI Identifier: 00 00 00 00:30:33.534 Multi-path I/O 00:30:33.534 May have multiple subsystem ports: Yes 00:30:33.534 May have multiple controllers: Yes 00:30:33.534 Associated with SR-IOV VF: No 00:30:33.534 Max Data Transfer Size: Unlimited 00:30:33.534 Max Number of Namespaces: 1024 00:30:33.534 Max Number of I/O Queues: 128 00:30:33.534 NVMe Specification Version (VS): 1.3 00:30:33.534 NVMe Specification Version (Identify): 1.3 00:30:33.534 Maximum Queue Entries: 1024 00:30:33.534 Contiguous Queues Required: No 00:30:33.534 Arbitration Mechanisms Supported 00:30:33.534 Weighted Round Robin: Not Supported 00:30:33.534 Vendor Specific: Not Supported 00:30:33.534 Reset Timeout: 7500 ms 00:30:33.534 Doorbell Stride: 4 bytes 00:30:33.534 NVM Subsystem Reset: Not Supported 00:30:33.534 Command Sets Supported 00:30:33.534 NVM Command Set: Supported 00:30:33.534 Boot Partition: Not Supported 00:30:33.534 Memory Page Size Minimum: 4096 bytes 00:30:33.534 Memory Page Size Maximum: 4096 bytes 00:30:33.534 Persistent Memory Region: Not Supported 00:30:33.534 Optional Asynchronous Events Supported 00:30:33.534 Namespace Attribute Notices: Supported 00:30:33.534 Firmware Activation Notices: Not Supported 00:30:33.534 ANA Change Notices: Supported 00:30:33.534 PLE Aggregate Log Change Notices: Not Supported 00:30:33.535 LBA Status Info Alert Notices: Not Supported 00:30:33.535 EGE Aggregate Log Change Notices: Not Supported 00:30:33.535 Normal NVM Subsystem Shutdown event: Not Supported 00:30:33.535 Zone Descriptor Change Notices: Not Supported 00:30:33.535 Discovery Log Change Notices: Not Supported 00:30:33.535 Controller Attributes 00:30:33.535 128-bit Host Identifier: Supported 00:30:33.535 Non-Operational Permissive Mode: Not Supported 00:30:33.535 NVM Sets: Not 
Supported 00:30:33.535 Read Recovery Levels: Not Supported 00:30:33.535 Endurance Groups: Not Supported 00:30:33.535 Predictable Latency Mode: Not Supported 00:30:33.535 Traffic Based Keep ALive: Supported 00:30:33.535 Namespace Granularity: Not Supported 00:30:33.535 SQ Associations: Not Supported 00:30:33.535 UUID List: Not Supported 00:30:33.535 Multi-Domain Subsystem: Not Supported 00:30:33.535 Fixed Capacity Management: Not Supported 00:30:33.535 Variable Capacity Management: Not Supported 00:30:33.535 Delete Endurance Group: Not Supported 00:30:33.535 Delete NVM Set: Not Supported 00:30:33.535 Extended LBA Formats Supported: Not Supported 00:30:33.535 Flexible Data Placement Supported: Not Supported 00:30:33.535 00:30:33.535 Controller Memory Buffer Support 00:30:33.535 ================================ 00:30:33.535 Supported: No 00:30:33.535 00:30:33.535 Persistent Memory Region Support 00:30:33.535 ================================ 00:30:33.535 Supported: No 00:30:33.535 00:30:33.535 Admin Command Set Attributes 00:30:33.535 ============================ 00:30:33.535 Security Send/Receive: Not Supported 00:30:33.535 Format NVM: Not Supported 00:30:33.535 Firmware Activate/Download: Not Supported 00:30:33.535 Namespace Management: Not Supported 00:30:33.535 Device Self-Test: Not Supported 00:30:33.535 Directives: Not Supported 00:30:33.535 NVMe-MI: Not Supported 00:30:33.535 Virtualization Management: Not Supported 00:30:33.535 Doorbell Buffer Config: Not Supported 00:30:33.535 Get LBA Status Capability: Not Supported 00:30:33.535 Command & Feature Lockdown Capability: Not Supported 00:30:33.535 Abort Command Limit: 4 00:30:33.535 Async Event Request Limit: 4 00:30:33.535 Number of Firmware Slots: N/A 00:30:33.535 Firmware Slot 1 Read-Only: N/A 00:30:33.535 Firmware Activation Without Reset: N/A 00:30:33.535 Multiple Update Detection Support: N/A 00:30:33.535 Firmware Update Granularity: No Information Provided 00:30:33.535 Per-Namespace SMART Log: Yes 
00:30:33.535 Asymmetric Namespace Access Log Page: Supported 00:30:33.535 ANA Transition Time : 10 sec 00:30:33.535 00:30:33.535 Asymmetric Namespace Access Capabilities 00:30:33.535 ANA Optimized State : Supported 00:30:33.535 ANA Non-Optimized State : Supported 00:30:33.535 ANA Inaccessible State : Supported 00:30:33.535 ANA Persistent Loss State : Supported 00:30:33.535 ANA Change State : Supported 00:30:33.535 ANAGRPID is not changed : No 00:30:33.535 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:30:33.535 00:30:33.535 ANA Group Identifier Maximum : 128 00:30:33.535 Number of ANA Group Identifiers : 128 00:30:33.535 Max Number of Allowed Namespaces : 1024 00:30:33.535 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:30:33.535 Command Effects Log Page: Supported 00:30:33.535 Get Log Page Extended Data: Supported 00:30:33.535 Telemetry Log Pages: Not Supported 00:30:33.535 Persistent Event Log Pages: Not Supported 00:30:33.535 Supported Log Pages Log Page: May Support 00:30:33.535 Commands Supported & Effects Log Page: Not Supported 00:30:33.535 Feature Identifiers & Effects Log Page:May Support 00:30:33.535 NVMe-MI Commands & Effects Log Page: May Support 00:30:33.535 Data Area 4 for Telemetry Log: Not Supported 00:30:33.535 Error Log Page Entries Supported: 128 00:30:33.535 Keep Alive: Supported 00:30:33.535 Keep Alive Granularity: 1000 ms 00:30:33.535 00:30:33.535 NVM Command Set Attributes 00:30:33.535 ========================== 00:30:33.535 Submission Queue Entry Size 00:30:33.535 Max: 64 00:30:33.535 Min: 64 00:30:33.535 Completion Queue Entry Size 00:30:33.535 Max: 16 00:30:33.535 Min: 16 00:30:33.535 Number of Namespaces: 1024 00:30:33.535 Compare Command: Not Supported 00:30:33.535 Write Uncorrectable Command: Not Supported 00:30:33.535 Dataset Management Command: Supported 00:30:33.535 Write Zeroes Command: Supported 00:30:33.535 Set Features Save Field: Not Supported 00:30:33.535 Reservations: Not Supported 00:30:33.535 Timestamp: Not Supported 
00:30:33.535 Copy: Not Supported 00:30:33.535 Volatile Write Cache: Present 00:30:33.535 Atomic Write Unit (Normal): 1 00:30:33.535 Atomic Write Unit (PFail): 1 00:30:33.535 Atomic Compare & Write Unit: 1 00:30:33.535 Fused Compare & Write: Not Supported 00:30:33.535 Scatter-Gather List 00:30:33.535 SGL Command Set: Supported 00:30:33.535 SGL Keyed: Not Supported 00:30:33.535 SGL Bit Bucket Descriptor: Not Supported 00:30:33.535 SGL Metadata Pointer: Not Supported 00:30:33.535 Oversized SGL: Not Supported 00:30:33.535 SGL Metadata Address: Not Supported 00:30:33.535 SGL Offset: Supported 00:30:33.535 Transport SGL Data Block: Not Supported 00:30:33.535 Replay Protected Memory Block: Not Supported 00:30:33.535 00:30:33.535 Firmware Slot Information 00:30:33.535 ========================= 00:30:33.535 Active slot: 0 00:30:33.535 00:30:33.535 Asymmetric Namespace Access 00:30:33.535 =========================== 00:30:33.535 Change Count : 0 00:30:33.535 Number of ANA Group Descriptors : 1 00:30:33.535 ANA Group Descriptor : 0 00:30:33.535 ANA Group ID : 1 00:30:33.535 Number of NSID Values : 1 00:30:33.535 Change Count : 0 00:30:33.535 ANA State : 1 00:30:33.535 Namespace Identifier : 1 00:30:33.535 00:30:33.535 Commands Supported and Effects 00:30:33.535 ============================== 00:30:33.535 Admin Commands 00:30:33.535 -------------- 00:30:33.535 Get Log Page (02h): Supported 00:30:33.535 Identify (06h): Supported 00:30:33.535 Abort (08h): Supported 00:30:33.535 Set Features (09h): Supported 00:30:33.535 Get Features (0Ah): Supported 00:30:33.535 Asynchronous Event Request (0Ch): Supported 00:30:33.535 Keep Alive (18h): Supported 00:30:33.535 I/O Commands 00:30:33.535 ------------ 00:30:33.535 Flush (00h): Supported 00:30:33.535 Write (01h): Supported LBA-Change 00:30:33.535 Read (02h): Supported 00:30:33.535 Write Zeroes (08h): Supported LBA-Change 00:30:33.535 Dataset Management (09h): Supported 00:30:33.535 00:30:33.535 Error Log 00:30:33.535 ========= 
00:30:33.535 Entry: 0 00:30:33.535 Error Count: 0x3 00:30:33.535 Submission Queue Id: 0x0 00:30:33.535 Command Id: 0x5 00:30:33.535 Phase Bit: 0 00:30:33.535 Status Code: 0x2 00:30:33.535 Status Code Type: 0x0 00:30:33.535 Do Not Retry: 1 00:30:33.535 Error Location: 0x28 00:30:33.535 LBA: 0x0 00:30:33.535 Namespace: 0x0 00:30:33.535 Vendor Log Page: 0x0 00:30:33.535 ----------- 00:30:33.535 Entry: 1 00:30:33.535 Error Count: 0x2 00:30:33.535 Submission Queue Id: 0x0 00:30:33.535 Command Id: 0x5 00:30:33.535 Phase Bit: 0 00:30:33.535 Status Code: 0x2 00:30:33.535 Status Code Type: 0x0 00:30:33.535 Do Not Retry: 1 00:30:33.535 Error Location: 0x28 00:30:33.535 LBA: 0x0 00:30:33.535 Namespace: 0x0 00:30:33.535 Vendor Log Page: 0x0 00:30:33.535 ----------- 00:30:33.535 Entry: 2 00:30:33.535 Error Count: 0x1 00:30:33.535 Submission Queue Id: 0x0 00:30:33.535 Command Id: 0x4 00:30:33.535 Phase Bit: 0 00:30:33.535 Status Code: 0x2 00:30:33.535 Status Code Type: 0x0 00:30:33.535 Do Not Retry: 1 00:30:33.535 Error Location: 0x28 00:30:33.535 LBA: 0x0 00:30:33.535 Namespace: 0x0 00:30:33.535 Vendor Log Page: 0x0 00:30:33.535 00:30:33.535 Number of Queues 00:30:33.535 ================ 00:30:33.535 Number of I/O Submission Queues: 128 00:30:33.535 Number of I/O Completion Queues: 128 00:30:33.535 00:30:33.535 ZNS Specific Controller Data 00:30:33.535 ============================ 00:30:33.535 Zone Append Size Limit: 0 00:30:33.535 00:30:33.535 00:30:33.535 Active Namespaces 00:30:33.535 ================= 00:30:33.535 get_feature(0x05) failed 00:30:33.535 Namespace ID:1 00:30:33.535 Command Set Identifier: NVM (00h) 00:30:33.535 Deallocate: Supported 00:30:33.535 Deallocated/Unwritten Error: Not Supported 00:30:33.535 Deallocated Read Value: Unknown 00:30:33.535 Deallocate in Write Zeroes: Not Supported 00:30:33.535 Deallocated Guard Field: 0xFFFF 00:30:33.535 Flush: Supported 00:30:33.536 Reservation: Not Supported 00:30:33.536 Namespace Sharing Capabilities: Multiple 
Controllers 00:30:33.536 Size (in LBAs): 1953525168 (931GiB) 00:30:33.536 Capacity (in LBAs): 1953525168 (931GiB) 00:30:33.536 Utilization (in LBAs): 1953525168 (931GiB) 00:30:33.536 UUID: 6630d34f-241d-4d76-becc-cdc5cb050355 00:30:33.536 Thin Provisioning: Not Supported 00:30:33.536 Per-NS Atomic Units: Yes 00:30:33.536 Atomic Boundary Size (Normal): 0 00:30:33.536 Atomic Boundary Size (PFail): 0 00:30:33.536 Atomic Boundary Offset: 0 00:30:33.536 NGUID/EUI64 Never Reused: No 00:30:33.536 ANA group ID: 1 00:30:33.536 Namespace Write Protected: No 00:30:33.536 Number of LBA Formats: 1 00:30:33.536 Current LBA Format: LBA Format #00 00:30:33.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:33.536 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.536 rmmod nvme_tcp 00:30:33.536 rmmod nvme_fabrics 00:30:33.536 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.796 14:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.704 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.704 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:30:35.704 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:35.704 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:30:35.704 14:49:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.704 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:35.704 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:35.705 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:35.705 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:35.705 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:35.705 14:49:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:38.994 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:38.994 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:30:39.562 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:39.562 00:30:39.562 real 0m16.703s 00:30:39.562 user 0m4.463s 00:30:39.562 sys 0m8.659s 00:30:39.562 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.562 14:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.562 ************************************ 00:30:39.562 END TEST nvmf_identify_kernel_target 00:30:39.562 ************************************ 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.822 ************************************ 00:30:39.822 START TEST nvmf_auth_host 00:30:39.822 ************************************ 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:30:39.822 * Looking for test storage... 
00:30:39.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.822 --rc genhtml_branch_coverage=1 00:30:39.822 --rc genhtml_function_coverage=1 00:30:39.822 --rc genhtml_legend=1 00:30:39.822 --rc geninfo_all_blocks=1 00:30:39.822 --rc geninfo_unexecuted_blocks=1 00:30:39.822 00:30:39.822 ' 00:30:39.822 14:49:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.822 --rc genhtml_branch_coverage=1 00:30:39.822 --rc genhtml_function_coverage=1 00:30:39.822 --rc genhtml_legend=1 00:30:39.822 --rc geninfo_all_blocks=1 00:30:39.822 --rc geninfo_unexecuted_blocks=1 00:30:39.822 00:30:39.822 ' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.822 --rc genhtml_branch_coverage=1 00:30:39.822 --rc genhtml_function_coverage=1 00:30:39.822 --rc genhtml_legend=1 00:30:39.822 --rc geninfo_all_blocks=1 00:30:39.822 --rc geninfo_unexecuted_blocks=1 00:30:39.822 00:30:39.822 ' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.822 --rc genhtml_branch_coverage=1 00:30:39.822 --rc genhtml_function_coverage=1 00:30:39.822 --rc genhtml_legend=1 00:30:39.822 --rc geninfo_all_blocks=1 00:30:39.822 --rc geninfo_unexecuted_blocks=1 00:30:39.822 00:30:39.822 ' 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.822 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.823 14:49:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:39.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.823 14:49:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.823 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:46.393 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:46.393 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:46.393 Found net devices under 0000:86:00.0: cvl_0_0 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.393 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:46.393 Found net devices under 0000:86:00.1: cvl_0_1 00:30:46.394 14:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:46.394 14:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:46.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:30:46.394 00:30:46.394 --- 10.0.0.2 ping statistics --- 00:30:46.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.394 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:46.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:30:46.394 00:30:46.394 --- 10.0.0.1 ping statistics --- 00:30:46.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.394 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1727192 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:30:46.394 14:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1727192 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1727192 ']' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:46.394 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9111e3f6c380a497428aec82426eb56b 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Jb2 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9111e3f6c380a497428aec82426eb56b 0 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9111e3f6c380a497428aec82426eb56b 0 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9111e3f6c380a497428aec82426eb56b 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Jb2 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Jb2 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Jb2 00:30:46.394 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bdee0a9f4fabac83a4d1563d28aaf6a6b2b65896516e361c1c9510afa7acffd6 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wUY 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bdee0a9f4fabac83a4d1563d28aaf6a6b2b65896516e361c1c9510afa7acffd6 3 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bdee0a9f4fabac83a4d1563d28aaf6a6b2b65896516e361c1c9510afa7acffd6 3 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bdee0a9f4fabac83a4d1563d28aaf6a6b2b65896516e361c1c9510afa7acffd6 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wUY 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wUY 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wUY 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.394 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21d9bd5d022db89ffdb9d8ce215347bd7c00b826d31bbecb 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.76m 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21d9bd5d022db89ffdb9d8ce215347bd7c00b826d31bbecb 0 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21d9bd5d022db89ffdb9d8ce215347bd7c00b826d31bbecb 0 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.395 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21d9bd5d022db89ffdb9d8ce215347bd7c00b826d31bbecb 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.76m 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.76m 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.76m 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6147754d7eae561d21b8c4481b1f3203c54164bd3df65e7d 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wIj 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6147754d7eae561d21b8c4481b1f3203c54164bd3df65e7d 2 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6147754d7eae561d21b8c4481b1f3203c54164bd3df65e7d 2 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6147754d7eae561d21b8c4481b1f3203c54164bd3df65e7d 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wIj 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wIj 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.wIj 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=36c8675842a1c02c840c174d8e6456aa 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Tv6 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 36c8675842a1c02c840c174d8e6456aa 1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 36c8675842a1c02c840c174d8e6456aa 1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=36c8675842a1c02c840c174d8e6456aa 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Tv6 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Tv6 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Tv6 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=5af28fb9a4bd82c357bf20b1457e4ac9 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.veS 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5af28fb9a4bd82c357bf20b1457e4ac9 1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5af28fb9a4bd82c357bf20b1457e4ac9 1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5af28fb9a4bd82c357bf20b1457e4ac9 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:30:46.395 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.veS 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.veS 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.veS 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:30:46.655 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a64e21a4950efe3cfa35bbea7049eb72a3bb28f95cbc6349 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9mJ 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a64e21a4950efe3cfa35bbea7049eb72a3bb28f95cbc6349 2 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a64e21a4950efe3cfa35bbea7049eb72a3bb28f95cbc6349 2 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a64e21a4950efe3cfa35bbea7049eb72a3bb28f95cbc6349 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9mJ 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9mJ 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9mJ 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0bfaa7c3e32d58493082a5a07af0485 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VGd 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0bfaa7c3e32d58493082a5a07af0485 0 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0bfaa7c3e32d58493082a5a07af0485 0 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0bfaa7c3e32d58493082a5a07af0485 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VGd 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VGd 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VGd 00:30:46.655 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d2ba471ba2017ee730a070cea51244155b5eccfb2b1dafb8ae43ed47963c5aa3 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qDP 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d2ba471ba2017ee730a070cea51244155b5eccfb2b1dafb8ae43ed47963c5aa3 3 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d2ba471ba2017ee730a070cea51244155b5eccfb2b1dafb8ae43ed47963c5aa3 3 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d2ba471ba2017ee730a070cea51244155b5eccfb2b1dafb8ae43ed47963c5aa3 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qDP 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qDP 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qDP 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1727192 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1727192 ']' 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
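Stripped of xtrace noise, each `gen_dhchap_key <digest> <len>` call above draws random hex from `/dev/urandom`, then pipes it through an inline `python -` step (`format_dhchap_key`) and stores the result mode 0600 in a temp file. A minimal sketch of that step, reusing the 16-byte sha256 key generated above instead of fresh urandom data; the CRC-32-append-then-base64 layout is my reading of the DH-HMAC-CHAP secret representation, not something the trace spells out:

```shell
# Sketch of format_dhchap_key from nvmf/common.sh (assumption: the
# inline python appends a little-endian CRC-32 of the ASCII hex string
# and base64-encodes it under a DHHC-1:<digest>: prefix, where digest
# is 0=null, 1=sha256, 2=sha384, 3=sha512).
key=5af28fb9a4bd82c357bf20b1457e4ac9
digest=1
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode("ascii")          # the hex string itself is the secret material
digest = int(sys.argv[2])
blob = key + struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print(f"DHHC-1:{digest:02d}:{base64.b64encode(blob).decode()}:")
EOF
chmod 0600 "$file"
cat "$file"
```

The shape matches the secrets visible later in the trace: a 48-hex-char key plus 4 CRC bytes base64-encodes to exactly 64 characters with no `=` padding, e.g. the `DHHC-1:00:MjFkOWJk...:` string registered as key1.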
00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.655 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jb2 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wUY ]] 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wUY 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.914 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.76m 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.wIj ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wIj 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Tv6 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.veS ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.veS 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.9mJ 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VGd ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VGd 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qDP 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:46.915 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:46.915 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:50.199 Waiting for block devices as requested 00:30:50.199 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:50.199 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:50.199 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:50.457 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:50.457 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:50.457 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:50.457 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:50.716 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:50.716 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:50.716 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:50.975 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:51.542 No valid GPT data, bailing 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:51.542 00:30:51.542 Discovery Log Number of Records 2, Generation counter 2 00:30:51.542 =====Discovery Log Entry 0====== 00:30:51.542 trtype: tcp 00:30:51.542 adrfam: ipv4 00:30:51.542 subtype: current discovery subsystem 00:30:51.542 treq: not specified, sq flow control disable supported 00:30:51.542 portid: 1 00:30:51.542 trsvcid: 4420 00:30:51.542 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:51.542 traddr: 10.0.0.1 00:30:51.542 eflags: none 00:30:51.542 sectype: none 00:30:51.542 =====Discovery Log Entry 1====== 00:30:51.542 trtype: tcp 00:30:51.542 adrfam: ipv4 00:30:51.542 subtype: nvme subsystem 00:30:51.542 treq: not specified, sq flow control disable supported 00:30:51.542 portid: 1 00:30:51.542 trsvcid: 4420 00:30:51.542 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:51.542 traddr: 10.0.0.1 00:30:51.542 eflags: none 00:30:51.542 sectype: none 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:51.542 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.543 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 nvme0n1 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
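The kernel-target side configured earlier (`configure_kernel_target`, `nvmet_auth_init`, `nvmet_auth_set_key`) reduces to a short configfs sequence. A hedged recap, needing root plus the `nvmet`/`nvmet-tcp` modules; the NQNs, IP, and key string are taken from this run, while the attribute file names (`device_path`, `addr_traddr`, `dhchap_hash`, ...) are my reading of the kernel nvmet configfs interface rather than anything printed verbatim in the trace:

```shell
#!/usr/bin/env bash
set -euo pipefail
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet

# Subsystem with one namespace backed by the local NVMe drive
mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# TCP listener on 10.0.0.1:4420 (the discovery log above shows both the
# discovery subsystem and cnode0 exposed on this portid)
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Restrict access to host0 and arm DH-HMAC-CHAP for it
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:' \
    > "$host/dhchap_key"
```

The initiator side then mirrors this with the two RPCs visible in the trace: `bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...` followed by `bdev_nvme_attach_controller ... --dhchap-key key1 --dhchap-ctrlr-key ckey1`.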
00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.802 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.062 nvme0n1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.062 14:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.062 
14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.062 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.321 nvme0n1 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.321 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:30:52.580 nvme0n1 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.580 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.581 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.840 nvme0n1 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:52.840 14:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:52.840 nvme0n1 00:30:52.840 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.099 
14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:53.099 
14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.099 14:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.099 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.099 nvme0n1 00:30:53.099 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.099 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.099 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.099 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.099 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.359 14:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.359 14:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.359 nvme0n1 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.359 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.618 14:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.618 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.619 nvme0n1 00:30:53.619 14:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.619 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:53.878 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:53.879 14:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.879 nvme0n1 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:53.879 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.138 14:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.138 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.138 nvme0n1 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.138 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.397 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.398 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.656 nvme0n1 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.656 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:54.657 
14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.657 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.917 nvme0n1 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:54.917 14:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.917 14:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.177 nvme0n1 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.177 14:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:55.177 
14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.177 14:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.177 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.436 nvme0n1 00:30:55.436 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.436 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.436 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.436 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.436 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.436 14:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:55.694 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.695 
14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.695 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.953 nvme0n1 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:55.953 14:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.953 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.214 nvme0n1 00:30:56.214 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.214 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.214 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.214 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.214 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.214 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:56.473 14:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.473 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.732 nvme0n1 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.733 14:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.733 14:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.302 nvme0n1 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.302 14:50:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:57.302 14:50:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:57.302 14:50:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.302 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.560 nvme0n1 00:30:57.560 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.560 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:57.560 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:57.561 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.561 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.820 14:50:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.820 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.080 nvme0n1
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.080 14:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2:
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=:
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2:
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]]
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=:
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.080 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.017 nvme0n1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.018 14:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.587 nvme0n1
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:59.587 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm:
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS:
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm:
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]]
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS:
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.588 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.156 nvme0n1
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==:
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK:
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==:
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]]
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK:
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:00.156 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.157 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.724 nvme0n1
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=:
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=:
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:31:00.724 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:00.725 14:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.293 nvme0n1
00:31:01.293 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.293 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:01.293 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:01.293 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.293 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.293 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2:
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=:
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2:
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=:
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.553 nvme0n1
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:01.553 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]]
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests
sha384 --dhchap-dhgroups ffdhe2048 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.813 nvme0n1 
00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:01.813 14:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:01.813 
14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.813 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.072 nvme0n1 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.072 14:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.072 14:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.331 nvme0n1 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.331 14:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.331 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.591 nvme0n1 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.591 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.850 nvme0n1 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.850 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:02.851 
14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.851 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.110 nvme0n1 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 
00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.110 14:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.110 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.111 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.370 nvme0n1 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.370 14:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:03.370 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.371 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.630 nvme0n1 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.630 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.889 nvme0n1 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:03.889 14:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:03.889 14:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:03.889 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.889 14:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 nvme0n1 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:04.148 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.149 14:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.149 
14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.149 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.408 nvme0n1 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.408 14:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.408 14:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.408 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.409 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.667 nvme0n1 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:04.667 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.668 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:04.927 14:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.927 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.186 nvme0n1 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.186 14:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:05.186 14:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:05.186 
14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.186 14:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.445 nvme0n1 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:05.445 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:05.445 14:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.446 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.014 nvme0n1 
00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:06.014 14:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.014 
14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.014 14:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.274 nvme0n1 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.274 14:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:06.274 14:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.274 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.862 nvme0n1 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:06.862 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:06.863 14:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:06.863 14:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.863 14:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.182 nvme0n1 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.182 14:50:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.182 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:07.183 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.782 nvme0n1 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:07.782 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:07.783 14:50:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.783 14:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.351 nvme0n1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.351 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.920 nvme0n1 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.920 14:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.859 nvme0n1 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:09.859 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.860 14:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.428 nvme0n1 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:10.428 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.429 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
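Note that the `bdev_nvme_attach_controller` call above passes only `--dhchap-key key4` with no controller key: for keyid 4 the trace shows `ckey=` empty, so the `${ckeys[keyid]:+...}` expansion at host/auth.sh@58 contributes nothing. A self-contained sketch of that conditional argument construction; the `ckey-secret-*` strings are placeholders, not the real DHHC-1 secrets:

```shell
#!/usr/bin/env bash
# Sketch of the conditional --dhchap-ctrlr-key construction (host/auth.sh@58).
# An empty entry (keyid 4 here) makes the flag pair disappear entirely.
ckeys=("ckey-secret-0" "ckey-secret-1" "ckey-secret-2" "ckey-secret-3" "")

build_attach_args() {
    local keyid=$1
    # ${ckeys[keyid]:+word} expands to the flag pair only when the controller
    # key for this keyid is set and non-empty; otherwise it expands to nothing.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo --dhchap-key "key${keyid}" "${ckey[@]}"
}

build_attach_args 1   # --dhchap-key key1 --dhchap-ctrlr-key ckey1
build_attach_args 4   # --dhchap-key key4
```

Because `"${ckey[@]}"` expands to zero words when the array is empty, the attach command line stays clean for keyids without a controller key, exactly as in the trace.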
common/autotest_common.sh@10 -- # set +x 00:31:10.997 nvme0n1 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.997 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:10.998 14:50:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.998 14:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.257 nvme0n1 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.257 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.258 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.518 nvme0n1 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.518 nvme0n1 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.518 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.778 nvme0n1 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.778 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.039 nvme0n1 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:12.039 14:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.039 14:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.039 14:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.299 nvme0n1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:12.299 14:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.299 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 nvme0n1 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 
14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.559 14:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.559 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.819 nvme0n1 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.819 14:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:12.819 14:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.819 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.079 nvme0n1 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:13.079 14:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.079 14:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.079 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.079 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.079 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.079 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.079 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.079 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.080 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.339 nvme0n1 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.339 
14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:13.339 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.340 
14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.340 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.599 nvme0n1 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:13.599 14:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.599 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:13.858 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.859 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.118 nvme0n1 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.118 14:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.378 nvme0n1 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==: 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]] 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.378 14:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.378 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.379 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.639 nvme0n1 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:14.639 14:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=: 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.639 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.898 nvme0n1 00:31:14.898 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.898 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:14.898 
14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:14.898 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.898 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:14.898 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2: 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]] 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.157 14:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.157 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.158 14:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.417 nvme0n1 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:15.417 14:50:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.417 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:15.676 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:15.676 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:15.676 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:15.676 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.676 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.935 nvme0n1 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:15.935 
14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:15.935 14:50:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:15.935 14:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.504 nvme0n1
14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==:
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK:
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==:
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK:
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.504 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.763 nvme0n1
14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=:
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=:
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:16.763 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:31:16.764 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.764 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.022 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.022 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:17.022 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:17.022 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:17.022 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.023 14:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.281 nvme0n1
14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.281 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:17.281 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:17.281 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.281 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.281 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2:
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=:
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTExMWUzZjZjMzgwYTQ5NzQyOGFlYzgyNDI2ZWI1NmJ/+ZA2:
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=: ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmRlZTBhOWY0ZmFiYWM4M2E0ZDE1NjNkMjhhYWY2YTZiMmI2NTg5NjUxNmUzNjFjMWM5NTEwYWZhN2FjZmZkNiuKpW4=:
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.282 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.850 nvme0n1
14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.850 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:17.850 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:17.850 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.850 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.850 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.109 14:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:18.677 nvme0n1
14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm:
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS:
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm:
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS:
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.677 14:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:19.245 nvme0n1
14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:31:19.245 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==:
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK:
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTY0ZTIxYTQ5NTBlZmUzY2ZhMzViYmVhNzA0OWViNzJhM2JiMjhmOTVjYmM2MzQ5uRRoQw==:
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK: ]]
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjBiZmFhN2MzZTMyZDU4NDkzMDgyYTVhMDdhZjA0ODXNtwiK:
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:19.246 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.183 nvme0n1
14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=:
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDJiYTQ3MWJhMjAxN2VlNzMwYTA3MGNlYTUxMjQ0MTU1YjVlY2NmYjJiMWRhZmI4YWU0M2VkNDc5NjNjNWFhM4yASCc=:
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.183 14:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.752 nvme0n1
14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==:
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:31:20.752 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==:
00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z
DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.753 request: 00:31:20.753 { 00:31:20.753 "name": "nvme0", 00:31:20.753 "trtype": "tcp", 00:31:20.753 "traddr": "10.0.0.1", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "4420", 00:31:20.753 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:20.753 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:20.753 "prchk_reftag": false, 00:31:20.753 "prchk_guard": false, 00:31:20.753 "hdgst": false, 00:31:20.753 "ddgst": false, 00:31:20.753 "allow_unrecognized_csi": false, 00:31:20.753 "method": "bdev_nvme_attach_controller", 00:31:20.753 "req_id": 1 00:31:20.753 } 00:31:20.753 Got JSON-RPC error 
response 00:31:20.753 response: 00:31:20.753 { 00:31:20.753 "code": -5, 00:31:20.753 "message": "Input/output error" 00:31:20.753 } 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.753 request: 
00:31:20.753 { 00:31:20.753 "name": "nvme0", 00:31:20.753 "trtype": "tcp", 00:31:20.753 "traddr": "10.0.0.1", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "4420", 00:31:20.753 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:20.753 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:20.753 "prchk_reftag": false, 00:31:20.753 "prchk_guard": false, 00:31:20.753 "hdgst": false, 00:31:20.753 "ddgst": false, 00:31:20.753 "dhchap_key": "key2", 00:31:20.753 "allow_unrecognized_csi": false, 00:31:20.753 "method": "bdev_nvme_attach_controller", 00:31:20.753 "req_id": 1 00:31:20.753 } 00:31:20.753 Got JSON-RPC error response 00:31:20.753 response: 00:31:20.753 { 00:31:20.753 "code": -5, 00:31:20.753 "message": "Input/output error" 00:31:20.753 } 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.753 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.013 14:50:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.013 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 request: 00:31:21.014 { 00:31:21.014 "name": "nvme0", 00:31:21.014 "trtype": "tcp", 00:31:21.014 "traddr": "10.0.0.1", 00:31:21.014 "adrfam": "ipv4", 00:31:21.014 "trsvcid": "4420", 00:31:21.014 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:31:21.014 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:31:21.014 "prchk_reftag": false, 00:31:21.014 "prchk_guard": false, 00:31:21.014 "hdgst": false, 00:31:21.014 "ddgst": false, 00:31:21.014 "dhchap_key": "key1", 00:31:21.014 "dhchap_ctrlr_key": "ckey2", 00:31:21.014 "allow_unrecognized_csi": false, 00:31:21.014 "method": "bdev_nvme_attach_controller", 00:31:21.014 "req_id": 1 00:31:21.014 } 00:31:21.014 Got JSON-RPC error response 00:31:21.014 response: 00:31:21.014 { 00:31:21.014 "code": -5, 00:31:21.014 "message": "Input/output error" 00:31:21.014 } 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.014 nvme0n1 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:21.014 14:50:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:21.014 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:21.273 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:21.273 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:21.273 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 14:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:31:21.273 
14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 request: 00:31:21.273 { 00:31:21.273 "name": "nvme0", 00:31:21.273 "dhchap_key": "key1", 00:31:21.273 "dhchap_ctrlr_key": "ckey2", 00:31:21.273 "method": "bdev_nvme_set_keys", 00:31:21.273 "req_id": 1 00:31:21.273 } 00:31:21.273 Got JSON-RPC error response 00:31:21.273 response: 
00:31:21.273 { 00:31:21.273 "code": -13, 00:31:21.273 "message": "Permission denied" 00:31:21.273 } 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:21.273 14:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:22.651 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:22.651 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:22.651 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.651 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:22.651 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.651 14:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:31:22.651 14:50:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjFkOWJkNWQwMjJkYjg5ZmZkYjlkOGNlMjE1MzQ3YmQ3YzAwYjgyNmQzMWJiZWNi3+uM3g==: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjE0Nzc1NGQ3ZWFlNTYxZDIxYjhjNDQ4MWIxZjMyMDNjNTQxNjRiZDNkZjY1ZTdk5SxMvg==: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.590 nvme0n1 00:31:23.590 14:50:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzZjODY3NTg0MmExYzAyYzg0MGMxNzRkOGU2NDU2YWGnMbkm: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWFmMjhmYjlhNGJkODJjMzU3YmYyMGIxNDU3ZTRhYzlHPodS: 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:23.590 14:50:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.590 request: 00:31:23.590 { 00:31:23.590 "name": "nvme0", 00:31:23.590 "dhchap_key": "key2", 00:31:23.590 "dhchap_ctrlr_key": "ckey1", 00:31:23.590 "method": "bdev_nvme_set_keys", 00:31:23.590 "req_id": 1 00:31:23.590 } 00:31:23.590 Got JSON-RPC error response 00:31:23.590 response: 00:31:23.590 { 00:31:23.590 "code": -13, 00:31:23.590 "message": "Permission denied" 00:31:23.590 } 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:31:23.590 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:23.591 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:23.591 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:23.591 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:23.591 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.591 14:50:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:23.591 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.591 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.849 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:31:23.849 14:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.784 rmmod nvme_tcp 00:31:24.784 rmmod 
nvme_fabrics 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1727192 ']' 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1727192 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1727192 ']' 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1727192 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727192 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727192' 00:31:24.784 killing process with pid 1727192 00:31:24.784 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1727192 00:31:24.785 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1727192 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.043 14:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:27.578 14:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:27.578 14:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:30.114 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:30.114 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:31.053 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:31.053 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Jb2 /tmp/spdk.key-null.76m /tmp/spdk.key-sha256.Tv6 /tmp/spdk.key-sha384.9mJ /tmp/spdk.key-sha512.qDP 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:31:31.053 14:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:34.347 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:34.347 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:34.347 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:34.347 00:31:34.347 real 0m54.220s 00:31:34.347 user 0m48.916s 00:31:34.347 sys 0m12.726s 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.347 ************************************ 00:31:34.347 END TEST nvmf_auth_host 00:31:34.347 ************************************ 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:31:34.347 14:50:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.347 ************************************ 00:31:34.347 START TEST nvmf_digest 00:31:34.347 ************************************ 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:34.347 * Looking for test storage... 00:31:34.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.347 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.348 --rc genhtml_branch_coverage=1 00:31:34.348 --rc genhtml_function_coverage=1 00:31:34.348 --rc genhtml_legend=1 00:31:34.348 --rc geninfo_all_blocks=1 00:31:34.348 --rc geninfo_unexecuted_blocks=1 00:31:34.348 00:31:34.348 ' 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.348 --rc genhtml_branch_coverage=1 00:31:34.348 --rc genhtml_function_coverage=1 00:31:34.348 --rc genhtml_legend=1 00:31:34.348 --rc geninfo_all_blocks=1 00:31:34.348 --rc geninfo_unexecuted_blocks=1 00:31:34.348 00:31:34.348 ' 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.348 --rc genhtml_branch_coverage=1 00:31:34.348 --rc genhtml_function_coverage=1 00:31:34.348 --rc genhtml_legend=1 00:31:34.348 --rc geninfo_all_blocks=1 00:31:34.348 --rc geninfo_unexecuted_blocks=1 00:31:34.348 00:31:34.348 ' 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.348 --rc genhtml_branch_coverage=1 00:31:34.348 --rc genhtml_function_coverage=1 00:31:34.348 --rc genhtml_legend=1 00:31:34.348 --rc geninfo_all_blocks=1 00:31:34.348 --rc geninfo_unexecuted_blocks=1 00:31:34.348 00:31:34.348 ' 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.348 14:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:34.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:34.348 14:50:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:31:34.348 14:50:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.916 14:50:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.916 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:40.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:40.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:40.917 Found net devices under 0000:86:00.0: cvl_0_0 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:40.917 Found net devices under 0000:86:00.1: cvl_0_1 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:31:40.917 00:31:40.917 --- 10.0.0.2 ping statistics --- 00:31:40.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.917 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:31:40.917 00:31:40.917 --- 10.0.0.1 ping statistics --- 00:31:40.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.917 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 ************************************ 00:31:40.917 START TEST nvmf_digest_clean 00:31:40.917 ************************************ 00:31:40.917 
14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1740806 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1740806 00:31:40.917 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:40.918 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1740806 ']' 00:31:40.918 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.918 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.918 14:50:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.918 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.918 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.918 [2024-11-20 14:50:52.017645] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:40.918 [2024-11-20 14:50:52.017689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.918 [2024-11-20 14:50:52.097271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.918 [2024-11-20 14:50:52.139512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.918 [2024-11-20 14:50:52.139549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.918 [2024-11-20 14:50:52.139556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.918 [2024-11-20 14:50:52.139562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.918 [2024-11-20 14:50:52.139567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:40.918 [2024-11-20 14:50:52.140137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.918 null0 00:31:40.918 [2024-11-20 14:50:52.301898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.918 [2024-11-20 14:50:52.326127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1740831 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1740831 /var/tmp/bperf.sock 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1740831 ']' 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:40.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:40.918 [2024-11-20 14:50:52.378607] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:40.918 [2024-11-20 14:50:52.378648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740831 ] 00:31:40.918 [2024-11-20 14:50:52.453871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.918 [2024-11-20 14:50:52.499190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:40.918 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:41.177 nvme0n1 00:31:41.177 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:41.177 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:41.436 Running I/O for 2 seconds... 00:31:43.309 24710.00 IOPS, 96.52 MiB/s [2024-11-20T13:50:55.267Z] 24851.50 IOPS, 97.08 MiB/s 00:31:43.309 Latency(us) 00:31:43.309 [2024-11-20T13:50:55.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.309 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:43.309 nvme0n1 : 2.00 24878.06 97.18 0.00 0.00 5139.44 2251.02 11739.49 00:31:43.309 [2024-11-20T13:50:55.267Z] =================================================================================================================== 00:31:43.309 [2024-11-20T13:50:55.267Z] Total : 24878.06 97.18 0.00 0.00 5139.44 2251.02 11739.49 00:31:43.309 { 00:31:43.309 "results": [ 00:31:43.309 { 00:31:43.309 "job": "nvme0n1", 00:31:43.309 "core_mask": "0x2", 00:31:43.309 "workload": "randread", 00:31:43.309 "status": "finished", 00:31:43.309 "queue_depth": 128, 00:31:43.309 "io_size": 4096, 00:31:43.309 "runtime": 2.003653, 00:31:43.309 "iops": 24878.060223002685, 00:31:43.309 "mibps": 97.17992274610424, 00:31:43.309 "io_failed": 0, 00:31:43.309 "io_timeout": 0, 00:31:43.309 "avg_latency_us": 5139.442299697945, 00:31:43.309 "min_latency_us": 2251.0191304347827, 00:31:43.309 "max_latency_us": 11739.492173913044 00:31:43.309 } 00:31:43.309 ], 00:31:43.309 "core_count": 1 00:31:43.309 } 00:31:43.309 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:43.309 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:31:43.309 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:43.309 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:43.309 | select(.opcode=="crc32c") 00:31:43.309 | "\(.module_name) \(.executed)"' 00:31:43.309 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1740831 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1740831 ']' 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1740831 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740831 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740831' 00:31:43.568 killing process with pid 1740831 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1740831 00:31:43.568 Received shutdown signal, test time was about 2.000000 seconds 00:31:43.568 00:31:43.568 Latency(us) 00:31:43.568 [2024-11-20T13:50:55.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.568 [2024-11-20T13:50:55.526Z] =================================================================================================================== 00:31:43.568 [2024-11-20T13:50:55.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.568 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1740831 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1741293 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1741293 /var/tmp/bperf.sock 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1741293 ']' 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:43.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.827 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:43.827 [2024-11-20 14:50:55.680830] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:43.827 [2024-11-20 14:50:55.680877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741293 ] 00:31:43.827 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:43.827 Zero copy mechanism will not be used. 
00:31:43.827 [2024-11-20 14:50:55.754291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.087 [2024-11-20 14:50:55.793079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.087 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.087 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:44.087 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:44.087 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:44.087 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:44.346 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:44.346 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:44.606 nvme0n1 00:31:44.606 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:44.606 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:44.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:44.606 Zero copy mechanism will not be used. 00:31:44.606 Running I/O for 2 seconds... 
00:31:46.922 5825.00 IOPS, 728.12 MiB/s [2024-11-20T13:50:58.880Z] 5759.00 IOPS, 719.88 MiB/s 00:31:46.922 Latency(us) 00:31:46.922 [2024-11-20T13:50:58.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.922 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:46.922 nvme0n1 : 2.00 5761.59 720.20 0.00 0.00 2774.18 712.35 6097.70 00:31:46.922 [2024-11-20T13:50:58.880Z] =================================================================================================================== 00:31:46.922 [2024-11-20T13:50:58.880Z] Total : 5761.59 720.20 0.00 0.00 2774.18 712.35 6097.70 00:31:46.922 { 00:31:46.922 "results": [ 00:31:46.922 { 00:31:46.922 "job": "nvme0n1", 00:31:46.922 "core_mask": "0x2", 00:31:46.922 "workload": "randread", 00:31:46.922 "status": "finished", 00:31:46.922 "queue_depth": 16, 00:31:46.922 "io_size": 131072, 00:31:46.922 "runtime": 2.001878, 00:31:46.922 "iops": 5761.589867114779, 00:31:46.922 "mibps": 720.1987333893474, 00:31:46.922 "io_failed": 0, 00:31:46.922 "io_timeout": 0, 00:31:46.922 "avg_latency_us": 2774.181140371378, 00:31:46.922 "min_latency_us": 712.3478260869565, 00:31:46.922 "max_latency_us": 6097.697391304348 00:31:46.922 } 00:31:46.922 ], 00:31:46.922 "core_count": 1 00:31:46.922 } 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:46.922 | select(.opcode=="crc32c") 00:31:46.922 | "\(.module_name) \(.executed)"' 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1741293 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1741293 ']' 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1741293 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741293 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741293' 00:31:46.922 killing process with pid 1741293 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1741293 00:31:46.922 Received shutdown signal, test time was about 2.000000 seconds 
00:31:46.922 00:31:46.922 Latency(us) 00:31:46.922 [2024-11-20T13:50:58.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.922 [2024-11-20T13:50:58.880Z] =================================================================================================================== 00:31:46.922 [2024-11-20T13:50:58.880Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.922 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1741293 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1741902 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1741902 /var/tmp/bperf.sock 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1741902 ']' 00:31:47.181 14:50:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:47.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.181 14:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:47.181 [2024-11-20 14:50:58.960751] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:47.181 [2024-11-20 14:50:58.960798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741902 ] 00:31:47.181 [2024-11-20 14:50:59.038102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.181 [2024-11-20 14:50:59.080538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.181 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.181 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:47.181 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:47.181 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:47.181 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:47.439 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:47.439 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:47.697 nvme0n1 00:31:47.956 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:47.956 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:47.956 Running I/O for 2 seconds... 
00:31:49.826 26085.00 IOPS, 101.89 MiB/s [2024-11-20T13:51:01.784Z] 26270.50 IOPS, 102.62 MiB/s 00:31:49.826 Latency(us) 00:31:49.826 [2024-11-20T13:51:01.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:49.826 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.826 nvme0n1 : 2.00 26270.54 102.62 0.00 0.00 4863.69 3675.71 9858.89 00:31:49.826 [2024-11-20T13:51:01.784Z] =================================================================================================================== 00:31:49.826 [2024-11-20T13:51:01.784Z] Total : 26270.54 102.62 0.00 0.00 4863.69 3675.71 9858.89 00:31:49.826 { 00:31:49.826 "results": [ 00:31:49.826 { 00:31:49.826 "job": "nvme0n1", 00:31:49.826 "core_mask": "0x2", 00:31:49.826 "workload": "randwrite", 00:31:49.826 "status": "finished", 00:31:49.826 "queue_depth": 128, 00:31:49.826 "io_size": 4096, 00:31:49.826 "runtime": 2.004565, 00:31:49.826 "iops": 26270.537498160447, 00:31:49.826 "mibps": 102.61928710218925, 00:31:49.826 "io_failed": 0, 00:31:49.826 "io_timeout": 0, 00:31:49.826 "avg_latency_us": 4863.693603929317, 00:31:49.826 "min_latency_us": 3675.7147826086957, 00:31:49.826 "max_latency_us": 9858.893913043477 00:31:49.826 } 00:31:49.826 ], 00:31:49.826 "core_count": 1 00:31:49.826 } 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:50.084 | select(.opcode=="crc32c") 00:31:50.084 | "\(.module_name) \(.executed)"' 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:50.084 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:50.085 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:50.085 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:50.085 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1741902 00:31:50.085 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1741902 ']' 00:31:50.085 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1741902 00:31:50.085 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:50.085 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.085 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1741902 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1741902' 00:31:50.345 killing process with pid 1741902 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1741902 00:31:50.345 Received shutdown signal, test time was about 2.000000 seconds 
00:31:50.345 00:31:50.345 Latency(us) 00:31:50.345 [2024-11-20T13:51:02.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.345 [2024-11-20T13:51:02.303Z] =================================================================================================================== 00:31:50.345 [2024-11-20T13:51:02.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1741902 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1742558 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1742558 /var/tmp/bperf.sock 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1742558 ']' 00:31:50.345 14:51:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:50.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.345 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:50.345 [2024-11-20 14:51:02.261550] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:50.345 [2024-11-20 14:51:02.261614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742558 ] 00:31:50.345 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:50.345 Zero copy mechanism will not be used. 
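The bperf_rpc and bperf_py calls traced above are thin wrappers from host/digest.sh around rpc.py and bdevperf.py, each pointed at the bperf UNIX socket. A minimal stand-alone sketch of that pattern (the wrapper bodies are assumptions inferred from the host/digest.sh@18/@19 trace lines; the commands are echoed rather than executed, since the socket only exists while a live bdevperf process is listening):

```shell
# Sketch (assumed wrapper bodies): every management call goes through the
# bperf RPC socket. Echo the command line instead of executing it so this
# runs without a live bdevperf process.
BPERF_SOCK=/var/tmp/bperf.sock
bperf_rpc() { echo "rpc.py -s $BPERF_SOCK $*"; }
bperf_py()  { echo "bdevperf.py -s $BPERF_SOCK $*"; }

bperf_rpc framework_start_init
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
bperf_py perform_tests
```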
00:31:50.604 [2024-11-20 14:51:02.335431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.604 [2024-11-20 14:51:02.373586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.604 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.604 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:50.604 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:50.604 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:50.604 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:50.863 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:50.863 14:51:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:51.121 nvme0n1 00:31:51.121 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:51.121 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:51.380 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:51.380 Zero copy mechanism will not be used. 00:31:51.380 Running I/O for 2 seconds... 
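The MiB/s figures bdevperf prints are simply IOPS scaled by the I/O size: MiB/s = iops * io_size / 2^20. For the 131072-byte randwrite run whose results follow, 6371.76 IOPS works out to the reported 796.47 MiB/s. A quick check with awk, using the iops and io_size values from the JSON results in this log:

```shell
# Reproduce bdevperf's mibps field from its iops and io_size fields:
# MiB/s = iops * io_size / (1024 * 1024).
awk 'BEGIN { iops = 6371.763578370445; io_size = 131072;
             printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
```

The same arithmetic checks out for the earlier 4096-byte run (26270.54 IOPS -> 102.62 MiB/s).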
00:31:53.252 6310.00 IOPS, 788.75 MiB/s [2024-11-20T13:51:05.210Z] 6374.50 IOPS, 796.81 MiB/s 00:31:53.252 Latency(us) 00:31:53.252 [2024-11-20T13:51:05.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:53.252 nvme0n1 : 2.00 6371.76 796.47 0.00 0.00 2506.75 1267.98 4302.58 00:31:53.252 [2024-11-20T13:51:05.210Z] =================================================================================================================== 00:31:53.252 [2024-11-20T13:51:05.210Z] Total : 6371.76 796.47 0.00 0.00 2506.75 1267.98 4302.58 00:31:53.252 { 00:31:53.252 "results": [ 00:31:53.252 { 00:31:53.252 "job": "nvme0n1", 00:31:53.252 "core_mask": "0x2", 00:31:53.252 "workload": "randwrite", 00:31:53.252 "status": "finished", 00:31:53.252 "queue_depth": 16, 00:31:53.252 "io_size": 131072, 00:31:53.252 "runtime": 2.00337, 00:31:53.252 "iops": 6371.763578370445, 00:31:53.252 "mibps": 796.4704472963057, 00:31:53.252 "io_failed": 0, 00:31:53.252 "io_timeout": 0, 00:31:53.252 "avg_latency_us": 2506.748372690271, 00:31:53.252 "min_latency_us": 1267.9791304347825, 00:31:53.252 "max_latency_us": 4302.580869565218 00:31:53.252 } 00:31:53.252 ], 00:31:53.252 "core_count": 1 00:31:53.252 } 00:31:53.252 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:53.252 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:53.252 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:53.252 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:53.252 | select(.opcode=="crc32c") 00:31:53.252 | "\(.module_name) \(.executed)"' 00:31:53.252 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1742558 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1742558 ']' 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1742558 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742558 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742558' 00:31:53.511 killing process with pid 1742558 00:31:53.511 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1742558 00:31:53.511 Received shutdown signal, test time was about 2.000000 seconds 
00:31:53.511 00:31:53.511 Latency(us) 00:31:53.511 [2024-11-20T13:51:05.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.512 [2024-11-20T13:51:05.470Z] =================================================================================================================== 00:31:53.512 [2024-11-20T13:51:05.470Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:53.512 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1742558 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1740806 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1740806 ']' 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1740806 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740806 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740806' 00:31:53.795 killing process with pid 1740806 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1740806 00:31:53.795 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1740806 00:31:54.098 00:31:54.098 
real 0m13.852s 00:31:54.098 user 0m26.579s 00:31:54.098 sys 0m4.516s 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:54.098 ************************************ 00:31:54.098 END TEST nvmf_digest_clean 00:31:54.098 ************************************ 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.098 ************************************ 00:31:54.098 START TEST nvmf_digest_error 00:31:54.098 ************************************ 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1743082 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1743082 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1743082 ']' 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.098 14:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.098 [2024-11-20 14:51:05.917211] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:54.098 [2024-11-20 14:51:05.917263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.098 [2024-11-20 14:51:05.996373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.098 [2024-11-20 14:51:06.037213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.098 [2024-11-20 14:51:06.037249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:54.098 [2024-11-20 14:51:06.037256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.098 [2024-11-20 14:51:06.037263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.098 [2024-11-20 14:51:06.037268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.098 [2024-11-20 14:51:06.037836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.398 [2024-11-20 14:51:06.118324] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.398 14:51:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.398 null0 00:31:54.398 [2024-11-20 14:51:06.216021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.398 [2024-11-20 14:51:06.240247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1743288 00:31:54.398 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1743288 /var/tmp/bperf.sock 00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1743288 ']' 
00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:54.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.399 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.399 [2024-11-20 14:51:06.293700] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:54.399 [2024-11-20 14:51:06.293741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743288 ] 00:31:54.657 [2024-11-20 14:51:06.367809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.657 [2024-11-20 14:51:06.409185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.657 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.657 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:54.657 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:54.657 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:54.916 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:54.916 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.916 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:54.916 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.916 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:54.916 14:51:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:55.174 nvme0n1 00:31:55.174 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:55.174 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.174 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:55.174 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.174 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:55.174 14:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:55.174 Running I/O for 2 seconds... 00:31:55.433 [2024-11-20 14:51:07.132554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.433 [2024-11-20 14:51:07.132586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.433 [2024-11-20 14:51:07.132596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.433 [2024-11-20 14:51:07.142482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.433 [2024-11-20 14:51:07.142507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.433 [2024-11-20 14:51:07.142516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.433 [2024-11-20 14:51:07.152203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.433 [2024-11-20 14:51:07.152226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.433 [2024-11-20 14:51:07.152234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.433 [2024-11-20 14:51:07.161905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.433 [2024-11-20 14:51:07.161926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19585 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.433 [2024-11-20 14:51:07.161935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.433 [2024-11-20 14:51:07.171572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.433 [2024-11-20 14:51:07.171592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.433 [2024-11-20 14:51:07.171600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.433 [2024-11-20 14:51:07.181255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.181276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.181284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.191034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.191054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.191063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.200775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.200796] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.200804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.211475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.211497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.211505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.221121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.221142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.221151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.230801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.230822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.230831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.240520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 
14:51:07.240542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.240550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.250237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.250260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.250268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.259998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.260021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.260029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.269711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.269733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.269742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.279468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.279491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.279499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.289159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.289184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.289193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.299035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.299057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.299065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.308858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.308879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.308888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.318549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.318570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.318579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.328305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.328326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.328334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.338087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.338108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.338117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.347750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.347771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.347780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.357460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.357481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.357489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.367114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.367135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.367144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.376833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.376853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.376861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.434 [2024-11-20 14:51:07.386553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.434 [2024-11-20 14:51:07.386575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.434 [2024-11-20 14:51:07.386583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.396369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.396392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.396401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.406092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.406114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.406123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.415823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.415844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.415853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.427447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.427468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.427476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.437111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.437131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.437140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.446792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.446811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.446819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.456524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.456546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.456558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.466079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.466100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3747 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.466108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.475919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.475939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.475953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.485567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.485588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.485595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.495284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.495305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.495313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.504617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.504638] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.504646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.515166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.515186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.515195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.523612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.523633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.523641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.535410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.535431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.535440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.548390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 
14:51:07.548418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.548426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.558571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.558593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.558601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.566874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.566895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.566903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.576830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.576851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.576860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.586534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.586556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.586564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.597140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.597161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.597169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.606626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.606656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.606665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.617337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.617358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.617366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.628236] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.628258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.628266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.692 [2024-11-20 14:51:07.638363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.692 [2024-11-20 14:51:07.638383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.692 [2024-11-20 14:51:07.638391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.648944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.648976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.648985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.658193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.658215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.658223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.668400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.668422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.668431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.679179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.679199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.679208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.689017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.689038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.689046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.698052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.698073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.698082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.709445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.709475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.720521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.720542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.720554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.730446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.730466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.951 [2024-11-20 14:51:07.730474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.951 [2024-11-20 14:51:07.741196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.951 [2024-11-20 14:51:07.741217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.741225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.750468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.750488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.750496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.760168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.760188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.760196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.770284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.770303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.770312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.781383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.781403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.781412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.789867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.789886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.789894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.799634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.799654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.809708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.809732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.809740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.820412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:103 nsid:1 lba:19209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.820439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.829514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.829533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.829541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.842115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.842135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.842144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.854210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.854246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.854254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.863355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.863375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.863383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.876368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.876388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.876397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.889483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.889504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.889513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.952 [2024-11-20 14:51:07.901305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:55.952 [2024-11-20 14:51:07.901326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.952 [2024-11-20 14:51:07.901338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.913432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.913454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.913463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.922074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.922094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.922102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.933277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.933298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.933306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.943176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.943196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.943205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.953708] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.953727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.953735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.961977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.961998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.962006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.972636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.972656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.972664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.982744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.982764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.982772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:07.992640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:07.992663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:07.992671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:08.002978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:08.002999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:08.003007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:08.012076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:08.012097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:08.012105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:08.022199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:08.022219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.211 [2024-11-20 14:51:08.022227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.211 [2024-11-20 14:51:08.034014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.211 [2024-11-20 14:51:08.034034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.034042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.047301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.047322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.047331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.056334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.056355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.056363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.069509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.069529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 
14:51:08.069538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.081799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.081819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.081827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.094847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.094868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.094876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.107912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.107941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 24631.00 IOPS, 96.21 MiB/s [2024-11-20T13:51:08.170Z] [2024-11-20 14:51:08.120791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.120811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.120819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.129899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.129921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.129928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.143373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.143393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.143401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.154021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 14:51:08.154041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.154050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.212 [2024-11-20 14:51:08.164853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.212 [2024-11-20 
14:51:08.164878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.212 [2024-11-20 14:51:08.164890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.177632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.177654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.177663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.185869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.185888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.185900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.198126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.198147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.198155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.210228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.210248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.210257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.219511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.219532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.219540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.232115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.232137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.232145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.243088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.243109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.243117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.252972] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.252992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.253000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.262234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.262255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.262263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.271860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.271880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.271888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.280672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.280696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.280704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.291597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.291617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.291625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.302002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.302022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.471 [2024-11-20 14:51:08.302032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.471 [2024-11-20 14:51:08.310842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.471 [2024-11-20 14:51:08.310862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.310871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.320087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.320107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.320115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.330079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.330099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.330108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.341241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.341261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.341269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.350070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.350091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.350100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.362170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.362190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 
14:51:08.362198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.371417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.371438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.371446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.382741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.382762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.382770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.395008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.395030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.395037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.405421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.405441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15191 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.405450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.415634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.415654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.415661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.472 [2024-11-20 14:51:08.424679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.472 [2024-11-20 14:51:08.424701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.472 [2024-11-20 14:51:08.424712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.437019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.437042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.437050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.446910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.446931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.446939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.456399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.456423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.456431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.467809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.467831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.467839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.476909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.476930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.476938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.487031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.487052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.487061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.496847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.496868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.507537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.507559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.731 [2024-11-20 14:51:08.507567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.731 [2024-11-20 14:51:08.515842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.731 [2024-11-20 14:51:08.515863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.515871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.528513] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.528534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.528542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.540182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.540203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.540212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.549453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.549474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.549482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.560023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.560043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.560052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.571150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.571172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.571180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.580583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.580603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.580611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.590500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.590520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.590528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.599271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.599291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.599299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.610533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.610553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.610561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.620819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.620840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.620848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.630037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.630058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.630070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.642239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.642261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.642269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.652602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.652625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.652633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.664461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.664482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.664490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.673740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.673761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.673771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.732 [2024-11-20 14:51:08.682608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.732 [2024-11-20 14:51:08.682633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14916 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:56.732 [2024-11-20 14:51:08.682641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.991 [2024-11-20 14:51:08.695489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.991 [2024-11-20 14:51:08.695512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.991 [2024-11-20 14:51:08.695522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.991 [2024-11-20 14:51:08.707989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.991 [2024-11-20 14:51:08.708011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.991 [2024-11-20 14:51:08.708019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.991 [2024-11-20 14:51:08.716856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.991 [2024-11-20 14:51:08.716877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.991 [2024-11-20 14:51:08.716886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.991 [2024-11-20 14:51:08.727915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.991 [2024-11-20 14:51:08.727940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:65 nsid:1 lba:15522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.991 [2024-11-20 14:51:08.727955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.991 [2024-11-20 14:51:08.738487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.991 [2024-11-20 14:51:08.738508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.991 [2024-11-20 14:51:08.738516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.748625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.748648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.748656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.760889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.760910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.760918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.772683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.772705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.772713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.784217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.784239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.784247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.793592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.793612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.793620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.802000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.802021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.802030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.811658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.811679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.822623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.822644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.822652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.831226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.831246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.831254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.840743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.840764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.840772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.851752] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.851773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.851781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.864706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.864727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.864736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.877839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.877860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.877868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.889517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.889539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.889547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.899188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.899208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.899217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.912725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.912747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.912762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.921129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.921149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.921158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.931859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.931879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.931887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.992 [2024-11-20 14:51:08.943848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:56.992 [2024-11-20 14:51:08.943870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.992 [2024-11-20 14:51:08.943879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:08.955012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:08.955036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:08.955044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:08.967784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:08.967807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:08.967815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:08.979542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:08.979564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:08.979572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:08.988103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:08.988125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:08.988133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:08.999403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:08.999424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:08.999432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:09.010761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:09.010787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:09.010795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:09.022705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:09.022726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:57.251 [2024-11-20 14:51:09.022734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:09.031174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:09.031195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:09.031203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:09.041409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:09.041430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.251 [2024-11-20 14:51:09.041439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.251 [2024-11-20 14:51:09.050055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.251 [2024-11-20 14:51:09.050076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.050083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 [2024-11-20 14:51:09.059452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.059472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.059480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 [2024-11-20 14:51:09.069074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.069095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.069103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 [2024-11-20 14:51:09.078382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.078404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.078412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 [2024-11-20 14:51:09.089574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.089596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.089604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 [2024-11-20 14:51:09.100938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.100965] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.100973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 [2024-11-20 14:51:09.109219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.109240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.109248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 24449.50 IOPS, 95.51 MiB/s [2024-11-20T13:51:09.210Z] [2024-11-20 14:51:09.119959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf03740) 00:31:57.252 [2024-11-20 14:51:09.119980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.252 [2024-11-20 14:51:09.119988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.252 00:31:57.252 Latency(us) 00:31:57.252 [2024-11-20T13:51:09.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:57.252 nvme0n1 : 2.00 24449.94 95.51 0.00 0.00 5228.20 2763.91 19261.89 00:31:57.252 [2024-11-20T13:51:09.210Z] =================================================================================================================== 00:31:57.252 [2024-11-20T13:51:09.210Z] Total : 24449.94 95.51 0.00 0.00 5228.20 2763.91 19261.89 00:31:57.252 { 00:31:57.252 
"results": [ 00:31:57.252 { 00:31:57.252 "job": "nvme0n1", 00:31:57.252 "core_mask": "0x2", 00:31:57.252 "workload": "randread", 00:31:57.252 "status": "finished", 00:31:57.252 "queue_depth": 128, 00:31:57.252 "io_size": 4096, 00:31:57.252 "runtime": 2.003809, 00:31:57.252 "iops": 24449.935098604707, 00:31:57.252 "mibps": 95.50755897892464, 00:31:57.252 "io_failed": 0, 00:31:57.252 "io_timeout": 0, 00:31:57.252 "avg_latency_us": 5228.197286036426, 00:31:57.252 "min_latency_us": 2763.9095652173914, 00:31:57.252 "max_latency_us": 19261.885217391304 00:31:57.252 } 00:31:57.252 ], 00:31:57.252 "core_count": 1 00:31:57.252 } 00:31:57.252 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:57.252 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:57.252 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:57.252 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:57.252 | .driver_specific 00:31:57.252 | .nvme_error 00:31:57.252 | .status_code 00:31:57.252 | .command_transient_transport_error' 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1743288 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1743288 ']' 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1743288 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:57.510 14:51:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743288 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743288' 00:31:57.510 killing process with pid 1743288 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1743288 00:31:57.510 Received shutdown signal, test time was about 2.000000 seconds 00:31:57.510 00:31:57.510 Latency(us) 00:31:57.510 [2024-11-20T13:51:09.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.510 [2024-11-20T13:51:09.468Z] =================================================================================================================== 00:31:57.510 [2024-11-20T13:51:09.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:57.510 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1743288 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:57.769 
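The get_transient_errcount step traced above pipes `bdev_get_iostat -b nvme0n1` through a jq filter and observed 192 transient transport errors. The same extraction can be sketched against an illustrative iostat-style payload (the JSON here is a hand-built sample shaped like the RPC output, not captured from this run):

```shell
# Sketch of the get_transient_errcount jq filter from host/digest.sh,
# run against a hand-built sample payload (the 192 mirrors the count
# the test observed above; the surrounding JSON is illustrative).
iostat_json='{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 192
          }
        }
      }
    }
  ]
}'

# Same filter shape as the logged command.
errcount=$(echo "$iostat_json" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

echo "$errcount"
```

The test then asserts `(( errcount > 0 ))`, i.e. that the injected CRC corruption actually surfaced as transient transport errors.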
14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1744059 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1744059 /var/tmp/bperf.sock 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1744059 ']' 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:57.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.769 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:57.769 [2024-11-20 14:51:09.602935] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:31:57.769 [2024-11-20 14:51:09.602993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744059 ] 00:31:57.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:57.769 Zero copy mechanism will not be used. 
00:31:57.769 [2024-11-20 14:51:09.675933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.769 [2024-11-20 14:51:09.718578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.028 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.028 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:58.028 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:58.028 14:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:58.286 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:58.286 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.286 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:58.286 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.286 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:58.286 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:58.546 nvme0n1 00:31:58.546 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:58.546 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.546 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:58.546 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.546 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:58.546 14:51:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:58.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:58.546 Zero copy mechanism will not be used. 00:31:58.546 Running I/O for 2 seconds... 00:31:58.546 [2024-11-20 14:51:10.452628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.452669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.452680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.458147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.458174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.458184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.546 
[2024-11-20 14:51:10.463816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.463839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.463848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.469402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.469424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.469432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.475288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.475316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.475325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.480900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.480931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.486921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.486943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.486958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.492551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.492572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.492580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.495588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.495608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.495616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.546 [2024-11-20 14:51:10.501196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.546 [2024-11-20 14:51:10.501220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.546 [2024-11-20 14:51:10.501229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.806 [2024-11-20 14:51:10.506992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.806 [2024-11-20 14:51:10.507015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.507027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.512813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.512835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.512844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.518409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.518430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.518439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.524065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.524088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:58.807 [2024-11-20 14:51:10.524096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.529882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.529905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.529913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.535538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.535561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.535569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.540640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.540662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.540670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.546092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.546114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.546122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.551230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.551251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.551260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.556726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.556747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.556756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.562271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.562292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.562300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.567826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.567846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.567858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.573093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.573115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.573123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.578421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.578443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.578451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.583838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.583860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.583869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.589289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.589311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.589319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.594860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.594882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.594890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.600278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.600300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.600309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.605672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.605693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.605701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.611131] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.611153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.611162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.614791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.614816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.614824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.618888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.618910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.618919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.624378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.624399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.624407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.629944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.629972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.629980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.635460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.635481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.807 [2024-11-20 14:51:10.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.807 [2024-11-20 14:51:10.640694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.807 [2024-11-20 14:51:10.640716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.640724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.646431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.646452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.646460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.652065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.652087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.652096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.657478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.657499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.657507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.663104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.663126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.663134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.668635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.668655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.668663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.674151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.674172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.674179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.679746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.679768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.679776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.685282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.685302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.685310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.690665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.690686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.690694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.696270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.696291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.696299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.701848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.701869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.701877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.707339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.707364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.707372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.712923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.712945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.712960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.718381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.718404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.718412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.723826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.723848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.723856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.729345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.729367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.729375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.734727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.734749] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.734757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.740095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.740117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.740125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.745621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.745643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.745651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.751069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.751089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.751097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:58.808 [2024-11-20 14:51:10.756539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:31:58.808 [2024-11-20 14:51:10.756561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:58.808 [2024-11-20 14:51:10.756569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.762078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.762104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.762116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.767654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.767677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.767686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.773135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.773157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.773165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.778636] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.778657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.778665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.784128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.784151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.784159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.789679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.789701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.789709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.795083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.795104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.795112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.800636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.800658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.800670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.806051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.806073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.806081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.811412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.811433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.811441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.816866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.816887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.816895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.822390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.822412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.822420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.827805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.827826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.827834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.833231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.833251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.833259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.838781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.838802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.838810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.844337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.844359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.844368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.849756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.849781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.849788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.855190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.855211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.855219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.860675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.860696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.860704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.866087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.866108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.866116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.871553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.069 [2024-11-20 14:51:10.871573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.069 [2024-11-20 14:51:10.871581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.069 [2024-11-20 14:51:10.877051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.877072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.877080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.882515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.882535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.882543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.887969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.887990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.887998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.893468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.893488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.893496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.899083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.899104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.899113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.904536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.904557] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.904565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.910080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.910102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.910110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.915435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.915456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.915464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.920788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.920809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.920817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.926238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.926258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.926266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.931756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.931777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.931785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.937286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.937307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.937315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.942791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.942812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.942823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.948040] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.948061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.948069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.953419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.953440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.953448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.958907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.958928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.958936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.964452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.964473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.964481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.969888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.969909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.969917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.975455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.975476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.975484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.980880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.980901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.980909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.986294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.986315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.986323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.991857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.991878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.991886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:10.997318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:10.997340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:10.997348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:11.002796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:11.002818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:11.002826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:11.008594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:11.008616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:11.008623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:11.014204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:11.014226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.070 [2024-11-20 14:51:11.014234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.070 [2024-11-20 14:51:11.019759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.070 [2024-11-20 14:51:11.019781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.071 [2024-11-20 14:51:11.019789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.025417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.025440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.025449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.030919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.030942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.030958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.036349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.036371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.036383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.041838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.041859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.041867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.047324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.047344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.047352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.053265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.053287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.053295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.059708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.059729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.059737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.065155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.065176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.065184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.070688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.070709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.070717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.076205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.076227] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.076235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.081618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.081639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.081648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.087084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.087109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.087117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.092674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.335 [2024-11-20 14:51:11.092695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.335 [2024-11-20 14:51:11.092703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.335 [2024-11-20 14:51:11.098208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.098229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.098236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.103779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.103801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.103809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.109469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.109492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.109500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.114960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.114981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.114989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.120433] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.120454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.120462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.125699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.125719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.125727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.131084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.131105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.131114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.136535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.136556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.136563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.142095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.142117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.142125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.147743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.147764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.147773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.153659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.153680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.153689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.160401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.160422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.160430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.168274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.168296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.168304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.176220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.176244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.176253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.184255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.184277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.184287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.192719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.192741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.192753] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.200729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.200753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.200762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.208559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.208582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.208590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.216634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.216658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.216666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.224871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.224895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.224904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.232733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.232756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.232764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.240932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.240961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.240970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.249373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.249396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.249404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.257443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.257467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.257476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.265850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.265877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.265886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.273368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.273391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.273400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.280109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.280132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.280140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.336 [2024-11-20 14:51:11.287277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.336 [2024-11-20 14:51:11.287301] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.336 [2024-11-20 14:51:11.287310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.292812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.292836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.292845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.298138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.298160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.298169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.303341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.303364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.303372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.308576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.308597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.308605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.313870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.313891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.313900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.319177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.319199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.319208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.324564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.324586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.324595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.329845] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.329867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.329875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.335166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.335187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.335196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.340441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.340462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.340471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.345841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.345862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.345870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.351134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.351155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.351164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.356372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.356393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.356401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.361645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.361666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.361678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.366976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.366998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.367005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.372313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.372334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.372342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.377665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.377688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.377696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.383000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.383020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.383028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.598 [2024-11-20 14:51:11.388343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.598 [2024-11-20 14:51:11.388364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.598 [2024-11-20 14:51:11.388372] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.393657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.393678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.393686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.399003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.399024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.399032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.404400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.404422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.404430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.409701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.409723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.409731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.414975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.414995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.415003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.420233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.420254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.420262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.425603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.425624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.425632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.430909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.430930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.430937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.436234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.436254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.436262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.441602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.441623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.441631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.446991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.447013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.447020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.599 5456.00 IOPS, 682.00 MiB/s [2024-11-20T13:51:11.557Z] [2024-11-20 14:51:11.453521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.453543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.453554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.459063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.459085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.459093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.464802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.464824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.464832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.470763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.470785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.470794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.476242] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.476265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.476273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.481683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.481704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.481712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.486970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.486991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.487000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.492414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.492437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.492446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.497805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.497827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.599 [2024-11-20 14:51:11.497836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.599 [2024-11-20 14:51:11.503183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.599 [2024-11-20 14:51:11.503209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.503218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.508757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.508780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.508788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.514523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.514545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.514553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.520226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.520248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.520257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.526279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.526302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.526310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.532524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.532546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.532555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.537994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.538016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.538024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.543398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.543421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.543429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.600 [2024-11-20 14:51:11.548846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.600 [2024-11-20 14:51:11.548869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.600 [2024-11-20 14:51:11.548878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.860 [2024-11-20 14:51:11.554262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.860 [2024-11-20 14:51:11.554287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.860 [2024-11-20 14:51:11.554296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.860 [2024-11-20 14:51:11.559786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:31:59.860 [2024-11-20 14:51:11.559810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.559818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.565155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.565177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.565185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.570557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.570579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.570587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.575810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.575832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.575840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.579333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.579353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.579361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.583180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.583202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.583210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.588374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.588394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.588402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.860 [2024-11-20 14:51:11.593651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.860 [2024-11-20 14:51:11.593672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.860 [2024-11-20 14:51:11.593684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.598805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.598827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.598834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.604118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.604141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.604149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.609417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.609439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.609447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.614537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.614559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.614570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.619930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.619960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.619969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.625287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.625309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.625318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.630676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.630698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.630706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.636042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.636064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.636071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.641432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.641454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.641463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.646836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.646857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.646865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.652212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.652234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.652242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.657561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.657582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.657590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.662993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.663015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.663023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.668442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.668464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.668472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.673796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.673818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.673826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.679234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.679256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.679264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.684564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.684585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.684597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.689991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.690013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.690021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.695344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.695366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.695375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.700708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.700730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.700738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.706001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.706023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.706030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.711344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.711365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.711373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.716614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.716636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.716644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.721932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.721961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.721970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.727382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.727403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.727412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.732768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.861 [2024-11-20 14:51:11.732794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.861 [2024-11-20 14:51:11.732802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.861 [2024-11-20 14:51:11.737801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.737823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.737831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.743237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.743258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.743266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.748741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.748763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.748771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.754197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.754220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.754228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.759632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.759653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.759661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.764990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.765012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.765020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.770414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.770436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.770444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.775843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.775865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.775873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.781195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.781218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.781226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.786560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.786581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.786589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.792067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.792089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.792097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.797392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.797413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.797422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.802642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.802664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.802672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.808003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.808023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.808032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:59.862 [2024-11-20 14:51:11.813424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:31:59.862 [2024-11-20 14:51:11.813447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:59.862 [2024-11-20 14:51:11.813455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.818848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.818870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.818880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.824220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.824243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.824254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.829792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.829815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.829823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.835125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.835148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.835156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.840520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.840542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.840550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.846019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.846041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.846049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.851400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.851422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.123 [2024-11-20 14:51:11.851430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.123 [2024-11-20 14:51:11.856814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.123 [2024-11-20 14:51:11.856835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.856843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.862195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.862216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.862224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.867509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.867531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.867539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.872891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.872918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.872925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.878336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.878358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.878368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.883835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.883856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.883864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.889268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.889289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.889297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.894811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.894831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.894839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.900381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.900402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.900410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.905831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.905852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.905861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.911310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.911332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.911340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.916627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.916649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.916657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.921984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.922007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.922015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.927706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.927728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.927736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.933431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.933452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.933460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.940109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.940131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.940139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.945774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.945796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.945803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.951174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.951194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.951203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.956511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.956532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.956540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.961965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.961986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.961995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.967289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.124 [2024-11-20 14:51:11.967310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.124 [2024-11-20 14:51:11.967321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:00.124 [2024-11-20 14:51:11.972753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.125 [2024-11-20 14:51:11.972775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.125 [2024-11-20 14:51:11.972782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0
sqhd:0002 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:11.978225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:11.978246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:11.978254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:11.983572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:11.983593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:11.983601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:11.988977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:11.988997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:11.989006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:11.994384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:11.994405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:11.994413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:11.999816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:11.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:11.999845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.005195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.005216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.005224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.010582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.010604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.010612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.015979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.015999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.016007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.021188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.021209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.021216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.026514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.026535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.026543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.031834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.031854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.031862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.037151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.037173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.037180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.042425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.042446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.042455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.047760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.047781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.047789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.053115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.053137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.053145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.058521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.058542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.058553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.063837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.063858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.063866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.069155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.069176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.069183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.125 [2024-11-20 14:51:12.074484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.125 [2024-11-20 14:51:12.074507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.125 [2024-11-20 14:51:12.074515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.386 [2024-11-20 14:51:12.079989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.386 [2024-11-20 14:51:12.080012] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.386 [2024-11-20 14:51:12.080021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.386 [2024-11-20 14:51:12.085389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.386 [2024-11-20 14:51:12.085411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.386 [2024-11-20 14:51:12.085419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.386 [2024-11-20 14:51:12.090775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.386 [2024-11-20 14:51:12.090796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.386 [2024-11-20 14:51:12.090805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.386 [2024-11-20 14:51:12.096132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.386 [2024-11-20 14:51:12.096153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.386 [2024-11-20 14:51:12.096161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.386 [2024-11-20 14:51:12.101431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:32:00.386 [2024-11-20 14:51:12.101453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.386 [2024-11-20 14:51:12.101460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.106740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.106767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.106776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.112104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.112129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.112138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.117449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.117470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.117478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.122814] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.122835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.122843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.128171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.128191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.128200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.133397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.133417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.133425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.138755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.138775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.138783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.144127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.144148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.144156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.149551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.149571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.149580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.154836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.154858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.154866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.160139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.160160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.160168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.165413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.165434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.165441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.170682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.170703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.170710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.176039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.176061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.176068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.181353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.181374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.181382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.186684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.186705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.186713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.191978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.191999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.192007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.197354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.197375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.197386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.202605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.202626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.202633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.207955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.207976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.207984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.213354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.213375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.213382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.218593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.218614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.218622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.224012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.224033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.224040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.229360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.229382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.229391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.234673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.234694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.234702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.240022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.240043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.387 [2024-11-20 14:51:12.240051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.387 [2024-11-20 14:51:12.245287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.387 [2024-11-20 14:51:12.245312] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.245320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.250626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.250647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.250655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.255881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.255902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.255909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.261195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.261215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.261224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.266450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.266471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.266479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.271676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.271697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.271704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.276840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.276861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.276869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.282137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.282157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.282166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.287436] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.287457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.287465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.292661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.292682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.292690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.297980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.298001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.298008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.303334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.303355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.303363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.308750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.308770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.308779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.314088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.314109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.314118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.319388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.319417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.324673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.324693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.324701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.329983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.330003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.330011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.335307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.335331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.335340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.388 [2024-11-20 14:51:12.340775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.388 [2024-11-20 14:51:12.340797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.388 [2024-11-20 14:51:12.340809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.647 [2024-11-20 14:51:12.346123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.346144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.346153] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.351457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.351479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.351489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.356836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.356857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.356865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.362357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.362378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.362386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.368126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.368148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.368157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.373693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.373715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.373723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.379255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.379277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.379286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.384682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.384704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.384712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.390333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.390355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.395766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.395787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.395795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.401290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.401310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.401318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.406830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.406850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.406858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.412248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.412269] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.412277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.417403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.417424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.417432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.422824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.422845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.422852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.428265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.428287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.433632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.433652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.433660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.439164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.439185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.439194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.444963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.444984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.444992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:00.648 [2024-11-20 14:51:12.450463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0) 00:32:00.648 [2024-11-20 14:51:12.450484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.648 [2024-11-20 14:51:12.450492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.648 5610.00 IOPS, 701.25 MiB/s [2024-11-20T13:51:12.606Z] 
[2024-11-20 14:51:12.457296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e214e0)
00:32:00.648 [2024-11-20 14:51:12.457318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:00.648 [2024-11-20 14:51:12.457326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:00.648
00:32:00.648 Latency(us)
00:32:00.648 [2024-11-20T13:51:12.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:00.648 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:00.648 nvme0n1 : 2.00 5608.58 701.07 0.00 0.00 2849.36 651.80 8605.16
00:32:00.648 [2024-11-20T13:51:12.606Z] ===================================================================================================================
00:32:00.649 [2024-11-20T13:51:12.607Z] Total : 5608.58 701.07 0.00 0.00 2849.36 651.80 8605.16
00:32:00.649 {
00:32:00.649   "results": [
00:32:00.649     {
00:32:00.649       "job": "nvme0n1",
00:32:00.649       "core_mask": "0x2",
00:32:00.649       "workload": "randread",
00:32:00.649       "status": "finished",
00:32:00.649       "queue_depth": 16,
00:32:00.649       "io_size": 131072,
00:32:00.649       "runtime": 2.003358,
00:32:00.649       "iops": 5608.583188825961,
00:32:00.649       "mibps": 701.0728986032451,
00:32:00.649       "io_failed": 0,
00:32:00.649       "io_timeout": 0,
00:32:00.649       "avg_latency_us": 2849.361189963936,
00:32:00.649       "min_latency_us": 651.7982608695652,
00:32:00.649       "max_latency_us": 8605.161739130435
00:32:00.649     }
00:32:00.649   ],
00:32:00.649   "core_count": 1
00:32:00.649 }
00:32:00.649 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:00.649 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:00.649 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:00.649 | .driver_specific
00:32:00.649 | .nvme_error
00:32:00.649 | .status_code
00:32:00.649 | .command_transient_transport_error'
00:32:00.649 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:00.908 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 363 > 0 ))
00:32:00.908 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1744059
00:32:00.908 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1744059 ']'
00:32:00.908 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1744059
00:32:00.908 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744059
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744059'
00:32:00.909 killing process with pid 1744059
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1744059
00:32:00.909 Received shutdown signal, test time was about 2.000000 seconds
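For reference, the `get_transient_errcount` step above just walks a fixed path through the `bdev_get_iostat` JSON with jq and checks that the counter is non-zero (363 here). A minimal Python sketch of the same bookkeeping; the result values are copied from the bdevperf JSON printed above, the shape of the iostat dict is assumed from the jq filter, and the count 363 is taken from the `(( 363 > 0 ))` check in the trace:

```python
import json

# bdevperf result block, values copied from the log above.
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "workload": "randread",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.003358,
      "iops": 5608.583188825961,
      "mibps": 701.0728986032451,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# MiB/s is just IOPS scaled by the IO size: 131072 B = 1/8 MiB.
assert abs(job["mibps"] - job["iops"] * job["io_size"] / (1 << 20)) < 1e-6

# Equivalent of the jq filter
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#             | .command_transient_transport_error
# applied to a hypothetical iostat reply carrying only that path.
iostat = {"bdevs": [{"driver_specific": {"nvme_error": {
    "status_code": {"command_transient_transport_error": 363}}}}]}
errcount = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
print(errcount)  # → 363
```

The test only passes the `(( errcount > 0 ))` gate because every injected crc32c corruption surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, which bumps this per-bdev counter.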
00:32:00.909
00:32:00.909 Latency(us)
00:32:00.909 [2024-11-20T13:51:12.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:00.909 [2024-11-20T13:51:12.867Z] ===================================================================================================================
00:32:00.909 [2024-11-20T13:51:12.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:00.909 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1744059
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1744616
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1744616 /var/tmp/bperf.sock
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1744616 ']'
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:01.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:01.168 14:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:01.168 [2024-11-20 14:51:12.952370] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization...
00:32:01.168 [2024-11-20 14:51:12.952425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744616 ]
00:32:01.168 [2024-11-20 14:51:13.030451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:01.168 [2024-11-20 14:51:13.068873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:01.426 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:01.994 nvme0n1
00:32:01.994 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:01.994 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:01.994 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:01.994 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:01.994 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:01.994 14:51:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:01.994 Running I/O for 2 seconds...
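The setup trace above tears down the previous bperf instance, starts a fresh bdevperf (`-w randwrite -o 4096 -t 2 -q 128`), and drives it over `/var/tmp/bperf.sock`: error injection is disabled while the controller attaches with data digest enabled (`--ddgst`), then re-armed so every 256th crc32c operation is corrupted. A dry-run sketch of that RPC sequence, using only the commands visible in the trace (nothing here talks to a real socket; `RPC` is shortened from the full `scripts/rpc.py` path in the log):

```python
# Dry-run reconstruction of the RPC calls issued by host/digest.sh above.
RPC = "scripts/rpc.py"        # shortened; the log uses the full SPDK tree path
SOCK = "/var/tmp/bperf.sock"  # bdevperf's RPC listen socket (-r flag)

def rpc(*args):
    """Build the argv for one rpc.py invocation against the bperf socket."""
    return [RPC, "-s", SOCK, *args]

sequence = [
    # Track per-bdev NVMe error counters; never give up on retries.
    rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1"),
    # Injection off while the controller attaches cleanly.
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable"),
    # Attach over TCP with data digest enabled.
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0"),
    # Re-arm: corrupt every 256th crc32c op, producing the digest errors below.
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256"),
]

for argv in sequence:
    print(" ".join(argv))
```

After this sequence, `bdevperf.py perform_tests` kicks off the 2-second run whose transient-transport-error completions follow.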
00:32:01.994 [2024-11-20 14:51:13.906218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef6cc8 00:32:01.994 [2024-11-20 14:51:13.907233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.994 [2024-11-20 14:51:13.907262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:01.994 [2024-11-20 14:51:13.915923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee9e10 00:32:01.994 [2024-11-20 14:51:13.916415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.994 [2024-11-20 14:51:13.916439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:01.994 [2024-11-20 14:51:13.925788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efe2e8 00:32:01.994 [2024-11-20 14:51:13.926399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.994 [2024-11-20 14:51:13.926422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:01.994 [2024-11-20 14:51:13.934858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeea00 00:32:01.994 [2024-11-20 14:51:13.935795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.994 [2024-11-20 14:51:13.935816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:01.994 [2024-11-20 14:51:13.944249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efeb58 00:32:01.994 [2024-11-20 14:51:13.945140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:01.994 [2024-11-20 14:51:13.945159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:02.253 [2024-11-20 14:51:13.953359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee0630 00:32:02.253 [2024-11-20 14:51:13.954114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:02.253 [2024-11-20 14:51:13.954136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:02.253 [2024-11-20 14:51:13.965476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef2d80 00:32:02.253 [2024-11-20 14:51:13.967042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:02.253 [2024-11-20 14:51:13.967062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:02.253 [2024-11-20 14:51:13.972046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efd208 00:32:02.253 [2024-11-20 14:51:13.972781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:02.253 [2024-11-20 14:51:13.972800] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:13.981737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee95a0
00:32:02.253 [2024-11-20 14:51:13.982573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:13.982592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:13.991440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edece0
00:32:02.253 [2024-11-20 14:51:13.992457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:13.992477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.000227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef3e60
00:32:02.253 [2024-11-20 14:51:14.001176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.001195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.009906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee23b8
00:32:02.253 [2024-11-20 14:51:14.011044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.011063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.019353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efc560
00:32:02.253 [2024-11-20 14:51:14.019979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.019998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.028691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee0a68
00:32:02.253 [2024-11-20 14:51:14.029602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.029621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.038869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efd640
00:32:02.253 [2024-11-20 14:51:14.040216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.040235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.047692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eea680
00:32:02.253 [2024-11-20 14:51:14.048970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.049005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.056837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efdeb0
00:32:02.253 [2024-11-20 14:51:14.058109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.058128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.065582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeaef0
00:32:02.253 [2024-11-20 14:51:14.066543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.066563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.074873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efb8b8
00:32:02.253 [2024-11-20 14:51:14.075708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.075728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.084732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eebb98
00:32:02.253 [2024-11-20 14:51:14.085892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.085912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:02.253 [2024-11-20 14:51:14.096136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5658
00:32:02.253 [2024-11-20 14:51:14.097776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.253 [2024-11-20 14:51:14.097795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.102691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef5be8
00:32:02.254 [2024-11-20 14:51:14.103442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.103461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.113238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8088
00:32:02.254 [2024-11-20 14:51:14.114530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.114550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.121967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef31b8
00:32:02.254 [2024-11-20 14:51:14.122984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.123004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.131244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef9b30
00:32:02.254 [2024-11-20 14:51:14.132101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.132119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.140903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef20d8
00:32:02.254 [2024-11-20 14:51:14.141949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.141968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.149761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edf118
00:32:02.254 [2024-11-20 14:51:14.150658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.150677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.159416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8088
00:32:02.254 [2024-11-20 14:51:14.160436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.160456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.168417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edfdc0
00:32:02.254 [2024-11-20 14:51:14.169429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.169451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.177942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efda78
00:32:02.254 [2024-11-20 14:51:14.178980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.179000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.187144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee0ea0
00:32:02.254 [2024-11-20 14:51:14.188127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.188146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.196792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeaef0
00:32:02.254 [2024-11-20 14:51:14.197964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.197983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:02.254 [2024-11-20 14:51:14.205701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee0630
00:32:02.254 [2024-11-20 14:51:14.206579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.254 [2024-11-20 14:51:14.206599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.215451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee6300
00:32:02.513 [2024-11-20 14:51:14.216395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.216415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.225147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eff3c8
00:32:02.513 [2024-11-20 14:51:14.226165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.226185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.234557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef5be8
00:32:02.513 [2024-11-20 14:51:14.235595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.235615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.245978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8d30
00:32:02.513 [2024-11-20 14:51:14.247517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.247535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.252606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7100
00:32:02.513 [2024-11-20 14:51:14.253262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.253281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.263263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeff18
00:32:02.513 [2024-11-20 14:51:14.264343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.264362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.271879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7538
00:32:02.513 [2024-11-20 14:51:14.272914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.513 [2024-11-20 14:51:14.272932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:02.513 [2024-11-20 14:51:14.281534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee7c50
00:32:02.513 [2024-11-20 14:51:14.282574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.282592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.292522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee7c50
00:32:02.514 [2024-11-20 14:51:14.294084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.294102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.300575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eebfd0
00:32:02.514 [2024-11-20 14:51:14.301648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.301666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.310347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee49b0
00:32:02.514 [2024-11-20 14:51:14.311752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.311770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.320081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef0ff8
00:32:02.514 [2024-11-20 14:51:14.321618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.321636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.329463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee1710
00:32:02.514 [2024-11-20 14:51:14.330962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.330979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.335929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef0bc0
00:32:02.514 [2024-11-20 14:51:14.336726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.336745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.347330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4de8
00:32:02.514 [2024-11-20 14:51:14.348625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.348644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.356160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efeb58
00:32:02.514 [2024-11-20 14:51:14.357186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.357215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.365452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef0bc0
00:32:02.514 [2024-11-20 14:51:14.366369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.366388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.374213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee6738
00:32:02.514 [2024-11-20 14:51:14.375115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.375134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.383843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5658
00:32:02.514 [2024-11-20 14:51:14.384911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.384930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.395343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7970
00:32:02.514 [2024-11-20 14:51:14.396868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.396887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.401916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeaef0
00:32:02.514 [2024-11-20 14:51:14.402614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.402633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.411675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef57b0
00:32:02.514 [2024-11-20 14:51:14.412503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.412529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.421495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eedd58
00:32:02.514 [2024-11-20 14:51:14.422112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.422131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.430423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee49b0
00:32:02.514 [2024-11-20 14:51:14.430939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.430965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.440271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4578
00:32:02.514 [2024-11-20 14:51:14.441217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.441236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.449850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7da8
00:32:02.514 [2024-11-20 14:51:14.450829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.450849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.459422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efb048
00:32:02.514 [2024-11-20 14:51:14.460047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.514 [2024-11-20 14:51:14.460067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:32:02.514 [2024-11-20 14:51:14.468581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eec408
00:32:02.773 [2024-11-20 14:51:14.469528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.773 [2024-11-20 14:51:14.469553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:32:02.773 [2024-11-20 14:51:14.478798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4de8
00:32:02.773 [2024-11-20 14:51:14.479935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.773 [2024-11-20 14:51:14.479960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:32:02.773 [2024-11-20 14:51:14.488541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edf118
00:32:02.773 [2024-11-20 14:51:14.489906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.773 [2024-11-20 14:51:14.489926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:02.773 [2024-11-20 14:51:14.497091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee1f80
00:32:02.773 [2024-11-20 14:51:14.498454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.773 [2024-11-20 14:51:14.498473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:32:02.773 [2024-11-20 14:51:14.506571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeff18
00:32:02.773 [2024-11-20 14:51:14.507510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.773 [2024-11-20 14:51:14.507530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:32:02.773 [2024-11-20 14:51:14.516616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee2c28
00:32:02.773 [2024-11-20 14:51:14.517826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.773 [2024-11-20 14:51:14.517844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:02.773 [2024-11-20 14:51:14.525840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efc998
00:32:02.774 [2024-11-20 14:51:14.527163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.527182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.535586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef3e60
00:32:02.774 [2024-11-20 14:51:14.536973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.536993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.545293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee27f0
00:32:02.774 [2024-11-20 14:51:14.546896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.546915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.552004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efdeb0
00:32:02.774 [2024-11-20 14:51:14.552706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.552724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.563960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eed0b0
00:32:02.774 [2024-11-20 14:51:14.565443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.565461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.570543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eea680
00:32:02.774 [2024-11-20 14:51:14.571131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.571150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.580203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef8618
00:32:02.774 [2024-11-20 14:51:14.580912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.580932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.589404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef8a50
00:32:02.774 [2024-11-20 14:51:14.590230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.590249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.598809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efcdd0
00:32:02.774 [2024-11-20 14:51:14.599780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.599799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.608367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeea00
00:32:02.774 [2024-11-20 14:51:14.608965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.608984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.617160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef6890
00:32:02.774 [2024-11-20 14:51:14.617672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.617691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.626143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef20d8
00:32:02.774 [2024-11-20 14:51:14.626826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.626844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.635701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeea00
00:32:02.774 [2024-11-20 14:51:14.636415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.636433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.645531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef6458
00:32:02.774 [2024-11-20 14:51:14.646520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.646539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.657119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eecc78
00:32:02.774 [2024-11-20 14:51:14.658505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.658537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.663866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efc560
00:32:02.774 [2024-11-20 14:51:14.664594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.664613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.673797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4140
00:32:02.774 [2024-11-20 14:51:14.674666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.674684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.685476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eebb98
00:32:02.774 [2024-11-20 14:51:14.686891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.686909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.694413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7100
00:32:02.774 [2024-11-20 14:51:14.695508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.695528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.703762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef4f40
00:32:02.774 [2024-11-20 14:51:14.704833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.704852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.713528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efac10
00:32:02.774 [2024-11-20 14:51:14.714723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.714742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:32:02.774 [2024-11-20 14:51:14.722051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eec840
00:32:02.774 [2024-11-20 14:51:14.723070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:02.774 [2024-11-20 14:51:14.723089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:32:03.033 [2024-11-20 14:51:14.731097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7970
00:32:03.033 [2024-11-20 14:51:14.731805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.033 [2024-11-20 14:51:14.731825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:32:03.033 [2024-11-20 14:51:14.740026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef4f40
00:32:03.033 [2024-11-20 14:51:14.740704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:03.033 [2024-11-20 14:51:14.740724]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:03.033 [2024-11-20 14:51:14.751302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee99d8 00:32:03.033 [2024-11-20 14:51:14.752439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.033 [2024-11-20 14:51:14.752459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:03.033 [2024-11-20 14:51:14.758906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4140 00:32:03.033 [2024-11-20 14:51:14.759588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.033 [2024-11-20 14:51:14.759607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.033 [2024-11-20 14:51:14.770509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8d30 00:32:03.033 [2024-11-20 14:51:14.771959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.033 [2024-11-20 14:51:14.771977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:03.033 [2024-11-20 14:51:14.777240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eed4e8 00:32:03.033 [2024-11-20 14:51:14.777801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21696 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:03.033 [2024-11-20 14:51:14.777820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:03.033 [2024-11-20 14:51:14.786432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeb328 00:32:03.034 [2024-11-20 14:51:14.787002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.787022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.797436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeb328 00:32:03.034 [2024-11-20 14:51:14.798508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.798528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.807179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef5378 00:32:03.034 [2024-11-20 14:51:14.808496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.808516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.815873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efb8b8 00:32:03.034 [2024-11-20 14:51:14.816791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 
nsid:1 lba:3905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.816810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.826199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3498 00:32:03.034 [2024-11-20 14:51:14.827624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.827643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.832981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edf988 00:32:03.034 [2024-11-20 14:51:14.833670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.833688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.844419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeb328 00:32:03.034 [2024-11-20 14:51:14.845448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.845466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.853290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eecc78 00:32:03.034 [2024-11-20 14:51:14.854287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.854304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.862957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee73e0 00:32:03.034 [2024-11-20 14:51:14.864122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.864141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.872661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee27f0 00:32:03.034 [2024-11-20 14:51:14.873907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.873925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.882336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efc998 00:32:03.034 [2024-11-20 14:51:14.883759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.883777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.889155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efef90 
00:32:03.034 [2024-11-20 14:51:14.889798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.889816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:03.034 26956.00 IOPS, 105.30 MiB/s [2024-11-20T13:51:14.992Z] [2024-11-20 14:51:14.900005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee73e0 00:32:03.034 [2024-11-20 14:51:14.900790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.900812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.909668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efdeb0 00:32:03.034 [2024-11-20 14:51:14.910586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.910606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.920109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5220 00:32:03.034 [2024-11-20 14:51:14.921175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.921195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.929694] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef8e88 00:32:03.034 [2024-11-20 14:51:14.930778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.930796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.939096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef92c0 00:32:03.034 [2024-11-20 14:51:14.940158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.940177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.948573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef6020 00:32:03.034 [2024-11-20 14:51:14.949645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.949663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.958285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef5378 00:32:03.034 [2024-11-20 14:51:14.959456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.959475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:32:03.034 [2024-11-20 14:51:14.967187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef31b8 00:32:03.034 [2024-11-20 14:51:14.968360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.968379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.975798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef81e0 00:32:03.034 [2024-11-20 14:51:14.976601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.976619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:03.034 [2024-11-20 14:51:14.985203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efac10 00:32:03.034 [2024-11-20 14:51:14.985812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.034 [2024-11-20 14:51:14.985832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:14.996022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edfdc0 00:32:03.294 [2024-11-20 14:51:14.997425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:14.997446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.005773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5220 00:32:03.294 [2024-11-20 14:51:15.007305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.007324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.012392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efb8b8 00:32:03.294 [2024-11-20 14:51:15.012977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.012996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.022043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4578 00:32:03.294 [2024-11-20 14:51:15.022818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.022837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.031311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4140 00:32:03.294 [2024-11-20 14:51:15.032233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.032252] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.041017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efe2e8 00:32:03.294 [2024-11-20 14:51:15.042060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.042079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.050738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efd640 00:32:03.294 [2024-11-20 14:51:15.051919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.051937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.060474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3d08 00:32:03.294 [2024-11-20 14:51:15.061771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.061790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.070154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee6fa8 00:32:03.294 [2024-11-20 14:51:15.071532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 
14:51:15.071551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.078790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef46d0 00:32:03.294 [2024-11-20 14:51:15.079831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.079851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.087975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edfdc0 00:32:03.294 [2024-11-20 14:51:15.089065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.089085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.097506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3498 00:32:03.294 [2024-11-20 14:51:15.098632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.098668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.106340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edf988 00:32:03.294 [2024-11-20 14:51:15.107495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1642 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.107514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.115741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef3e60 00:32:03.294 [2024-11-20 14:51:15.116484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.116503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.126433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efeb58 00:32:03.294 [2024-11-20 14:51:15.127932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.127953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.133005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4578 00:32:03.294 [2024-11-20 14:51:15.133700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.133718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.142230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4140 00:32:03.294 [2024-11-20 14:51:15.142996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:29 nsid:1 lba:12869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.143017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.152592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef3a28 00:32:03.294 [2024-11-20 14:51:15.153470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.153488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.162146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee99d8 00:32:03.294 [2024-11-20 14:51:15.163157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.163176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.171513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef1ca0 00:32:03.294 [2024-11-20 14:51:15.172608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.172627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.181020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efb8b8 00:32:03.294 [2024-11-20 14:51:15.182099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.182118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.190462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef7538 00:32:03.294 [2024-11-20 14:51:15.191559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.191578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.199919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efa3a0 00:32:03.294 [2024-11-20 14:51:15.200969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.200988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.209269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef9f68 00:32:03.294 [2024-11-20 14:51:15.210373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.210392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.218567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eea680 00:32:03.294 
[2024-11-20 14:51:15.219600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.294 [2024-11-20 14:51:15.219619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.294 [2024-11-20 14:51:15.227855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3d08 00:32:03.294 [2024-11-20 14:51:15.228937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.295 [2024-11-20 14:51:15.228960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.295 [2024-11-20 14:51:15.237136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef0ff8 00:32:03.295 [2024-11-20 14:51:15.238221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.295 [2024-11-20 14:51:15.238240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.295 [2024-11-20 14:51:15.246535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee23b8 00:32:03.295 [2024-11-20 14:51:15.247621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.295 [2024-11-20 14:51:15.247641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.255235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4f180) with pdu=0x200016eff3c8 00:32:03.554 [2024-11-20 14:51:15.256481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.256502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.263823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efe720 00:32:03.554 [2024-11-20 14:51:15.264517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.264537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.273077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016edece0 00:32:03.554 [2024-11-20 14:51:15.273764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.273784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.283486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeea00 00:32:03.554 [2024-11-20 14:51:15.284569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.284588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.293184] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef0ff8 00:32:03.554 [2024-11-20 14:51:15.294355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.294374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.301810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee99d8 00:32:03.554 [2024-11-20 14:51:15.302704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.302723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.311220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef2510 00:32:03.554 [2024-11-20 14:51:15.311946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.311970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.320027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef8a50 00:32:03.554 [2024-11-20 14:51:15.321337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.321355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 
dnr:0 00:32:03.554 [2024-11-20 14:51:15.328621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef2510 00:32:03.554 [2024-11-20 14:51:15.329305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.329323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.337884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efdeb0 00:32:03.554 [2024-11-20 14:51:15.338540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.338559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.347124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8088 00:32:03.554 [2024-11-20 14:51:15.347801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.347820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.356490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef5378 00:32:03.554 [2024-11-20 14:51:15.357215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.357234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.365758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef96f8 00:32:03.554 [2024-11-20 14:51:15.366414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.366432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.375005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3d08 00:32:03.554 [2024-11-20 14:51:15.375662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.375680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.384586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef46d0 00:32:03.554 [2024-11-20 14:51:15.385388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.385408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.394310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef81e0 00:32:03.554 [2024-11-20 14:51:15.395377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.395396] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.402942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef0350 00:32:03.554 [2024-11-20 14:51:15.403619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.403638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.412361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee01f8 00:32:03.554 [2024-11-20 14:51:15.412810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.412830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.421810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeea00 00:32:03.554 [2024-11-20 14:51:15.422662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.431527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efcdd0 00:32:03.554 [2024-11-20 14:51:15.432140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.432158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.441393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ede470 00:32:03.554 [2024-11-20 14:51:15.442120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.554 [2024-11-20 14:51:15.442139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:03.554 [2024-11-20 14:51:15.450175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4578 00:32:03.555 [2024-11-20 14:51:15.451644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.555 [2024-11-20 14:51:15.451663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:03.555 [2024-11-20 14:51:15.459008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3498 00:32:03.555 [2024-11-20 14:51:15.459651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.555 [2024-11-20 14:51:15.459670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.555 [2024-11-20 14:51:15.468260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef2510 00:32:03.555 [2024-11-20 14:51:15.468945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:03.555 [2024-11-20 14:51:15.468971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.555 [2024-11-20 14:51:15.477844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee01f8 00:32:03.555 [2024-11-20 14:51:15.478669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.555 [2024-11-20 14:51:15.478689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:03.555 [2024-11-20 14:51:15.486686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee1710 00:32:03.555 [2024-11-20 14:51:15.487324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.555 [2024-11-20 14:51:15.487342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:03.555 [2024-11-20 14:51:15.497544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8088 00:32:03.555 [2024-11-20 14:51:15.498648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.555 [2024-11-20 14:51:15.498666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.555 [2024-11-20 14:51:15.506806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef6890 00:32:03.555 [2024-11-20 14:51:15.507927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 
nsid:1 lba:6953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.555 [2024-11-20 14:51:15.507952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.516327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eed920 00:32:03.814 [2024-11-20 14:51:15.517390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.517411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.525611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee6300 00:32:03.814 [2024-11-20 14:51:15.526696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.526715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.536092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee49b0 00:32:03.814 [2024-11-20 14:51:15.537647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.537666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.542727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eeee38 00:32:03.814 [2024-11-20 14:51:15.543347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.543366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.552132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efa7d8 00:32:03.814 [2024-11-20 14:51:15.552721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.552740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.563797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee9168 00:32:03.814 [2024-11-20 14:51:15.565236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.565255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.570642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5220 00:32:03.814 [2024-11-20 14:51:15.571325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.571343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.581834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef3e60 
00:32:03.814 [2024-11-20 14:51:15.582896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.582915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.591673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee3060 00:32:03.814 [2024-11-20 14:51:15.592998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.593017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.600500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efb8b8 00:32:03.814 [2024-11-20 14:51:15.601562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.601581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.609872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef5378 00:32:03.814 [2024-11-20 14:51:15.610856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.610875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.618680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4f180) with pdu=0x200016ee2c28 00:32:03.814 [2024-11-20 14:51:15.619635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.619654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.628366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efcdd0 00:32:03.814 [2024-11-20 14:51:15.629455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.629474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:03.814 [2024-11-20 14:51:15.639402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eed0b0 00:32:03.814 [2024-11-20 14:51:15.640975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.814 [2024-11-20 14:51:15.640993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.646233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eec840 00:32:03.815 [2024-11-20 14:51:15.647097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.647115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.657631] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4578 00:32:03.815 [2024-11-20 14:51:15.658863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.658881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.667450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efc128 00:32:03.815 [2024-11-20 14:51:15.668921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.668939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.674279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efe2e8 00:32:03.815 [2024-11-20 14:51:15.675041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.675060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.684208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8d30 00:32:03.815 [2024-11-20 14:51:15.685075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.685095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:32:03.815 [2024-11-20 14:51:15.693752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eed0b0 00:32:03.815 [2024-11-20 14:51:15.694653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.694672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.703408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5220 00:32:03.815 [2024-11-20 14:51:15.703922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.703942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.713138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee0a68 00:32:03.815 [2024-11-20 14:51:15.713807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.713830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.722828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee6fa8 00:32:03.815 [2024-11-20 14:51:15.723595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.723614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.731578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eff3c8 00:32:03.815 [2024-11-20 14:51:15.732982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.732999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.739577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef8a50 00:32:03.815 [2024-11-20 14:51:15.740326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.740345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.749285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5a90 00:32:03.815 [2024-11-20 14:51:15.750139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.759159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee0a68 00:32:03.815 [2024-11-20 14:51:15.760147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.760166] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:03.815 [2024-11-20 14:51:15.768928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efd640 00:32:03.815 [2024-11-20 14:51:15.770079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:03.815 [2024-11-20 14:51:15.770099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.778702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee8d30 00:32:04.074 [2024-11-20 14:51:15.779920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.779940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.788417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5a90 00:32:04.074 [2024-11-20 14:51:15.789836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.789855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.796907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016eef270 00:32:04.074 [2024-11-20 14:51:15.798293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 
14:51:15.798312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.804962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efd640 00:32:04.074 [2024-11-20 14:51:15.805723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.805741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.814755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee5ec8 00:32:04.074 [2024-11-20 14:51:15.815660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.815679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.824159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efdeb0 00:32:04.074 [2024-11-20 14:51:15.825029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.825048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.833207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efc998 00:32:04.074 [2024-11-20 14:51:15.834095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21045 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.834114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.842961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016efeb58 00:32:04.074 [2024-11-20 14:51:15.844155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.844176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.854697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef4b08 00:32:04.074 [2024-11-20 14:51:15.856266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.856286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.861470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef4b08 00:32:04.074 [2024-11-20 14:51:15.862231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.074 [2024-11-20 14:51:15.862249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:04.074 [2024-11-20 14:51:15.871431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ef20d8 00:32:04.075 [2024-11-20 14:51:15.872348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.075 [2024-11-20 14:51:15.872367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:04.075 [2024-11-20 14:51:15.880763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ede470 00:32:04.075 [2024-11-20 14:51:15.881329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.075 [2024-11-20 14:51:15.881347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:04.075 [2024-11-20 14:51:15.892114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f180) with pdu=0x200016ee4de8 00:32:04.075 [2024-11-20 14:51:15.894592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:04.075 [2024-11-20 14:51:15.894611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:04.075 27135.50 IOPS, 106.00 MiB/s 00:32:04.075 Latency(us) 00:32:04.075 [2024-11-20T13:51:16.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.075 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.075 nvme0n1 : 2.00 27167.18 106.12 0.00 0.00 4706.78 1837.86 15158.76 00:32:04.075 [2024-11-20T13:51:16.033Z] =================================================================================================================== 00:32:04.075 [2024-11-20T13:51:16.033Z] Total : 27167.18 106.12 0.00 0.00 4706.78 1837.86 15158.76 00:32:04.075 { 00:32:04.075 "results": [ 00:32:04.075 { 00:32:04.075 "job": "nvme0n1", 00:32:04.075 "core_mask": "0x2", 
00:32:04.075 "workload": "randwrite", 00:32:04.075 "status": "finished", 00:32:04.075 "queue_depth": 128, 00:32:04.075 "io_size": 4096, 00:32:04.075 "runtime": 2.002379, 00:32:04.075 "iops": 27167.184633878, 00:32:04.075 "mibps": 106.12181497608594, 00:32:04.075 "io_failed": 0, 00:32:04.075 "io_timeout": 0, 00:32:04.075 "avg_latency_us": 4706.782051140645, 00:32:04.075 "min_latency_us": 1837.8573913043479, 00:32:04.075 "max_latency_us": 15158.761739130436 00:32:04.075 } 00:32:04.075 ], 00:32:04.075 "core_count": 1 00:32:04.075 } 00:32:04.075 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:04.075 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:04.075 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:04.075 | .driver_specific 00:32:04.075 | .nvme_error 00:32:04.075 | .status_code 00:32:04.075 | .command_transient_transport_error' 00:32:04.075 14:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1744616 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1744616 ']' 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1744616 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744616 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744616' 00:32:04.334 killing process with pid 1744616 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1744616 00:32:04.334 Received shutdown signal, test time was about 2.000000 seconds 00:32:04.334 00:32:04.334 Latency(us) 00:32:04.334 [2024-11-20T13:51:16.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.334 [2024-11-20T13:51:16.292Z] =================================================================================================================== 00:32:04.334 [2024-11-20T13:51:16.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:04.334 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1744616 00:32:04.593 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:32:04.593 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:04.593 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:04.593 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1745292 
00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1745292 /var/tmp/bperf.sock 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1745292 ']' 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:04.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.594 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:04.594 [2024-11-20 14:51:16.378013] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:32:04.594 [2024-11-20 14:51:16.378062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745292 ] 00:32:04.594 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:04.594 Zero copy mechanism will not be used. 
00:32:04.594 [2024-11-20 14:51:16.451982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.594 [2024-11-20 14:51:16.494373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:04.852 14:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.420 nvme0n1 00:32:05.420 14:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:05.420 14:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.420 14:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:05.420 14:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.420 14:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:05.420 14:51:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:05.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:05.420 Zero copy mechanism will not be used. 00:32:05.420 Running I/O for 2 seconds... 00:32:05.420 [2024-11-20 14:51:17.199017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.199115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.199143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.203713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.203777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.203799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.420 
[2024-11-20 14:51:17.208109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.208181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.208201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.212615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.212718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.212739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.217314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.217417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.217437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.222063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.222155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.222175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.226411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.226490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.226510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.230718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.230792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.230812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.235160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.235231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.235250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.239376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.239445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.239465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.243611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.243666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.243685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.247839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.247907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.247927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.252117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.252195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.252214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.256410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.256476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.256495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.260791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.260862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.420 [2024-11-20 14:51:17.260883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.420 [2024-11-20 14:51:17.265193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.420 [2024-11-20 14:51:17.265262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.265281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.269441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.269513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.269532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.273652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.273722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.273741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.278423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.278582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.278601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.282746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.282821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.282840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.287001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.287078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.287097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.291387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.291455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.291473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.295692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.295815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.295838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.300410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.300579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.300598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.306372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.306525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.306545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.312031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.312133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.312151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.317797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.317977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.317996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.323785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.323872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.323891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.328850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.328979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.328998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.333379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 
00:32:05.421 [2024-11-20 14:51:17.333480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.333499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.338242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.338309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.338329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.342936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.343073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.343093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.348862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.348922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.348941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.354153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.354234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.354253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.359309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.359383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.359403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.364214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.364299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.364318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.369089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.369169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.369188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.421 [2024-11-20 14:51:17.374093] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.421 [2024-11-20 14:51:17.374227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.421 [2024-11-20 14:51:17.374248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.683 [2024-11-20 14:51:17.379128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.683 [2024-11-20 14:51:17.379218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.683 [2024-11-20 14:51:17.379239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.683 [2024-11-20 14:51:17.384022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.683 [2024-11-20 14:51:17.384075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.683 [2024-11-20 14:51:17.384095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.683 [2024-11-20 14:51:17.389038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.683 [2024-11-20 14:51:17.389102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.683 [2024-11-20 14:51:17.389122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0
00:32:05.683 [2024-11-20 14:51:17.393728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.393795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.393814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.398356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.398436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.398456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.403160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.403226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.403246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.407860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.407922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.407941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.412586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.412650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.412669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.417407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.417479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.417498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.422225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.422300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.422319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.426794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.426847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.426870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.431383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.431453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.431472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.436125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.436208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.436227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.440650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.440711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.440729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.445203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.445256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.445275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.449731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.449805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.449824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.454326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.454387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.454407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.458957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.459017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.459036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.463608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.463668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.463687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.468237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.468295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.468313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.473065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.473212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.473232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.478895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.479038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.479058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.484970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.485093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.683 [2024-11-20 14:51:17.485113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.683 [2024-11-20 14:51:17.490854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.683 [2024-11-20 14:51:17.491021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.491041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.496148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.496237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.496256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.501293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.501371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.501391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.506757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.506866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.506885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.511351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.511430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.511450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.515936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.516019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.516038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.520611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.520708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.520727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.526015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.526180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.526199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.531669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.531980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.532001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.537920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.538188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.538208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.544160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.544462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.544482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.550288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.550520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.550540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.555737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.555997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.556017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.561414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.561665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.561689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.566519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.566763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.566783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.571537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.571801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.576787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.577016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.577037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.581798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.582040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.582060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.586642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.586860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.586880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.592322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.592543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.592564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.596876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.597139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.597159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.601409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.601663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.601683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.605770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.606029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.606054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.610320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.610573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.610593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.614865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.615109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.615129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.619415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.619644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.619664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.623783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.624028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.624049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.628000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.628219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.628239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.632258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.632501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.684 [2024-11-20 14:51:17.632523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.684 [2024-11-20 14:51:17.636605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.684 [2024-11-20 14:51:17.636866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.685 [2024-11-20 14:51:17.636888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.640913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.641143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.641166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.645233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.645480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.645503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.649409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.649675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.649697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.653720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.653960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.653981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.658203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.658442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.658463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.663113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.663350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.663371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.667922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.668185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.668206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.672351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.672599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.672620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.676768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.677033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.677054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.681227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.681461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.681485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.685410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.685631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.685651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.689825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.690066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.690086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.694678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.694930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.694957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.700161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.700411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.700432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.704914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.705153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.705174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.709392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.709627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.709647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.713966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.714216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.714236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.718533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.718788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.718808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.722878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.723133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.723157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.727369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.727612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.727632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.732592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.732818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.732839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.945 [2024-11-20 14:51:17.737873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.945 [2024-11-20 14:51:17.738132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.945 [2024-11-20 14:51:17.738153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.743507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.743753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.743774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.748726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.748940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.748967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.753813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.754045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.754065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.758355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.758583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.758604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.762814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.763072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.763092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.767236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.767464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.767484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.771702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.771960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.771982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.776218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.776446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.776467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.780782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.781032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.781053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:05.946 [2024-11-20 14:51:17.785553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:05.946 [2024-11-20 14:51:17.785783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:05.946 [2024-11-20 14:51:17.785804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.789927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.790181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.790201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.794533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.794775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.794795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.799648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.799887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.799907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.804763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.804991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:05.946 [2024-11-20 14:51:17.805015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.809799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.810052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.810072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.814337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.814579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.814599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.818832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.819089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.819109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.823297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.823521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.823540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.827774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.828016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.828036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.834011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.834329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.834349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.840044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.840283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.840303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.845928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.846193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.846214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.851220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.851446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.851470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.856483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.856733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.856753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.861660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.861886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.861906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.866465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 
00:32:05.946 [2024-11-20 14:51:17.866710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.866730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.946 [2024-11-20 14:51:17.871207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.946 [2024-11-20 14:51:17.871439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.946 [2024-11-20 14:51:17.871459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.947 [2024-11-20 14:51:17.876168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.947 [2024-11-20 14:51:17.876393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.947 [2024-11-20 14:51:17.876414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:05.947 [2024-11-20 14:51:17.881594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.947 [2024-11-20 14:51:17.881812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.947 [2024-11-20 14:51:17.881832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:05.947 [2024-11-20 14:51:17.886809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.947 [2024-11-20 14:51:17.887052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.947 [2024-11-20 14:51:17.887072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:05.947 [2024-11-20 14:51:17.891733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.947 [2024-11-20 14:51:17.891939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.947 [2024-11-20 14:51:17.891965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:05.947 [2024-11-20 14:51:17.896501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:05.947 [2024-11-20 14:51:17.896752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.947 [2024-11-20 14:51:17.896774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.901679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.901930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.901958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.907023] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.907253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.907274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.912492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.912712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.912733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.917453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.917681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.917702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.922371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.922608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.922628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:32:06.207 [2024-11-20 14:51:17.927614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.927839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.927859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.932912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.933149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.933169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.938145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.938392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.938416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.942941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.943210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.943230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.947902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.948134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.948154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.952906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.953151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.953170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.957743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.957963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.957984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.963899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.964038] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.970496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.970808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.970829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.976743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.977007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.977028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.982436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.982666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 14:51:17.982686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.987686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.207 [2024-11-20 14:51:17.987937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.207 [2024-11-20 
14:51:17.987967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.207 [2024-11-20 14:51:17.992316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:17.992548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:17.992567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:17.996844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:17.997078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:17.997097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.001393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.001621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.001641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.005780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.006040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.006060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.010314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.010547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.014778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.015039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.015059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.019764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.019990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.020010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.024873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.025102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.025123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.030023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.030270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.030290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.034916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.035165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.035185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.039607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.039834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.039854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.044140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.044364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.044384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.048546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.048780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.048800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.052943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.053195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.053216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.057506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.057757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.057777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.062092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 
00:32:06.208 [2024-11-20 14:51:18.062328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.062348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.066596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.066822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.066841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.070978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.071207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.071227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.075426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.075653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.075672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.079888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.080134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.080154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.085447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.208 [2024-11-20 14:51:18.085691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.208 [2024-11-20 14:51:18.085711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.208 [2024-11-20 14:51:18.090575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.090796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.090816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.095062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.095283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.095303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.099614] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.099841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.099860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.104096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.104312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.104332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.108480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.108710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.108733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.112904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.113157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.113177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:32:06.209 [2024-11-20 14:51:18.117467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.117694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.117714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.122436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.122655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.122674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.127635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.127855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.127875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.132710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.132942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.132968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.137885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.138116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.138136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.142859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.143092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.143112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.147379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.147594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.147614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.152149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.152381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.152401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.157114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.209 [2024-11-20 14:51:18.157344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.209 [2024-11-20 14:51:18.157364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.209 [2024-11-20 14:51:18.161914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.162151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.162174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.166498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.166747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.166769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.171351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.171581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:06.470 [2024-11-20 14:51:18.171602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.176267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.176493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.176514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.181681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.181899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.181919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.186586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.186835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.186855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.191309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.191536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.191556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.470 6351.00 IOPS, 793.88 MiB/s [2024-11-20T13:51:18.428Z] [2024-11-20 14:51:18.197515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.197574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.197593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.203033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.470 [2024-11-20 14:51:18.203091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.470 [2024-11-20 14:51:18.203110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.470 [2024-11-20 14:51:18.207891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.208012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.208033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.212594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 
14:51:18.212650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.212669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.217322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.217394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.217414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.222122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.222194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.222214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.226968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.227043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.227062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.231728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) 
with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.231798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.231817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.236479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.236547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.236565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.241271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.241350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.241369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.246085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.246211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.246229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.251332] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.251434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.251453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.256823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.256881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.256900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.262490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.262543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.262562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.267877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.268007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.268026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 
14:51:18.272762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.272888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.272907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.277674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.277757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.277777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.282319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.282380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.282399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.287030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.287093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.287112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.291694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.291755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.291773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.296456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.296512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.296531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.301254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.301308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.301327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.305913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.306002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.306021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.310572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.310628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.310647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.315187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.315251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.315269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.319812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.319871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.319892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.324412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.324479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.324497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.328982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.329054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.329073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.333577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.471 [2024-11-20 14:51:18.333663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.471 [2024-11-20 14:51:18.333681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.471 [2024-11-20 14:51:18.338255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.472 [2024-11-20 14:51:18.338306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.472 [2024-11-20 14:51:18.338325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.472 [2024-11-20 14:51:18.343000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.472 [2024-11-20 14:51:18.343058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:06.472 [2024-11-20 14:51:18.343076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.472 [2024-11-20 14:51:18.348256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.472 [2024-11-20 14:51:18.348339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.472 [2024-11-20 14:51:18.348359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.472 [2024-11-20 14:51:18.353474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.472 [2024-11-20 14:51:18.353541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.472 [2024-11-20 14:51:18.353561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.472 [2024-11-20 14:51:18.358565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.472 [2024-11-20 14:51:18.358627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.472 [2024-11-20 14:51:18.358646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.472 [2024-11-20 14:51:18.363413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.472 [2024-11-20 14:51:18.363479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.363499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.368194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.368264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.368284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.373018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.373113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.373132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.377680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.377744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.377763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.382481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.382539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.382559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.387232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.387364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.387383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.392744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.392809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.392830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.398434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.398494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.398513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.403254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.403324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.403343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.409333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.409465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.409484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.414852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.414911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.414930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.419696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.419788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.472 [2024-11-20 14:51:18.424524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.472 [2024-11-20 14:51:18.424584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.472 [2024-11-20 14:51:18.424604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.429220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.429280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.429301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.434049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.434116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.434136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.438905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.439050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.443839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.443907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.443927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.448702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.448772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.448795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.453572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.453638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.453657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.458411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.458479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.733 [2024-11-20 14:51:18.458498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.733 [2024-11-20 14:51:18.463194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.733 [2024-11-20 14:51:18.463315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.463334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.467979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.468096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.468115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.473765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.473845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.473864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.479921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.479994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.480013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.485172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.485254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.485274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.491956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.492092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.492111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.498807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.498930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.498955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.504508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.504587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.504606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.509857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.509935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.509961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.515165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.515219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.515239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.520000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.520082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.520102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.526009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.526168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.526188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.532092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.532170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.532189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.537476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.537556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.537576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.542541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.542618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.542638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.547750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.547830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.547850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.552725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.552861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.552880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.558565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.558741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.558760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.566034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.566091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.566111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.572622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.572701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.572720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.578285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.578376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.578395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.584450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.584604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.584624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.590887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.591051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.591070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.597116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.597265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.597288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.734 [2024-11-20 14:51:18.604761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.734 [2024-11-20 14:51:18.604824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.734 [2024-11-20 14:51:18.604844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.611087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.611189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.611209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.616025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.616147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.616166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.620859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.620931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.620955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.625538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.625615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.625635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.630280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.630352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.630372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.635008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.635080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.635099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.640369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.640426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.640445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.645606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.645667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.645685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.650854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.650931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.650955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.655976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.656031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.656049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.661895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.661968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.662002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.666651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.666734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.666754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.671681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.671737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.671756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.676939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.676999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.677018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.681868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.681928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.681962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.735 [2024-11-20 14:51:18.686689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.735 [2024-11-20 14:51:18.686751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.735 [2024-11-20 14:51:18.686771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.692008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.692080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.692101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.697133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.697216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.697237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.702355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.702449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.702469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.707755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.707902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.707921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.713522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.713588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.713608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.718621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.718675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.718693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.724332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.724390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.724409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.729610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.729696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.729716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.734971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.735040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.735063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.740764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.740821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.740841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.746465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.746522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.746541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.752184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.752268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.757580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.757635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.995 [2024-11-20 14:51:18.757654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.995 [2024-11-20 14:51:18.764333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.995 [2024-11-20 14:51:18.764456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.764475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.771639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.771785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.771805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.779382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.779524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.779543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.786584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.786644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.786664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.792664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.792721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.792740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.797929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.798020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.798040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.802854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.802912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.802931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.807627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8
00:32:06.996 [2024-11-20 14:51:18.807692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:06.996 [2024-11-20 14:51:18.807712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:32:06.996 [2024-11-20 14:51:18.812520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.812588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.812608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.817297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.817369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.817389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.822180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.822244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.822263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.826899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.826960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.826979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.831812] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.831868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.831888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.836478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.836539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.836558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.841099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.841156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.841176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.846107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.846176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.846195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:32:06.996 [2024-11-20 14:51:18.850913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.850998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.851018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.855608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.855674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.855694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.860504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.860561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.860581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.865294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.865352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.865371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.869932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.870009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.870029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.874630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.874708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.874730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.879306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.879382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.879402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.884076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.884138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.884157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.888871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.888935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.888963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.893573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.893648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.893668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.898632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.898817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.996 [2024-11-20 14:51:18.898836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.996 [2024-11-20 14:51:18.904879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.996 [2024-11-20 14:51:18.905036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:06.997 [2024-11-20 14:51:18.905056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.997 [2024-11-20 14:51:18.911450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.997 [2024-11-20 14:51:18.911596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.997 [2024-11-20 14:51:18.911614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.997 [2024-11-20 14:51:18.918842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.997 [2024-11-20 14:51:18.918970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.997 [2024-11-20 14:51:18.918989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.997 [2024-11-20 14:51:18.926020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.997 [2024-11-20 14:51:18.926137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.997 [2024-11-20 14:51:18.926160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:06.997 [2024-11-20 14:51:18.933680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.997 [2024-11-20 14:51:18.933843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.997 [2024-11-20 14:51:18.933862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:06.997 [2024-11-20 14:51:18.941151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.997 [2024-11-20 14:51:18.941323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.997 [2024-11-20 14:51:18.941342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.997 [2024-11-20 14:51:18.949039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:06.997 [2024-11-20 14:51:18.949148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:06.997 [2024-11-20 14:51:18.949169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.256 [2024-11-20 14:51:18.956630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.256 [2024-11-20 14:51:18.956759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.256 [2024-11-20 14:51:18.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.256 [2024-11-20 14:51:18.964189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.256 [2024-11-20 14:51:18.964317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.256 [2024-11-20 14:51:18.964336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.256 [2024-11-20 14:51:18.972504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.256 [2024-11-20 14:51:18.972649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.256 [2024-11-20 14:51:18.972669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.256 [2024-11-20 14:51:18.979532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.256 [2024-11-20 14:51:18.979671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:18.979691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:18.986866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:18.987007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:18.987027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:18.994004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 
00:32:07.257 [2024-11-20 14:51:18.994143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:18.994163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:18.999818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:18.999874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:18.999893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.005548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.005618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.005636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.011742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.011802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.011821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.017093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.017147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.017166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.022373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.022425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.022445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.027892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.027977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.027996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.033543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.033626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.033645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.040390] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.040521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.040544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.047901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.048030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.048050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.054248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.054396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.054415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.061439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.061577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.061596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:32:07.257 [2024-11-20 14:51:19.068899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.068983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.069004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.075428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.075571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.075590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.082636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.082782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.082803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.089792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.089955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.089989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.097168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.097296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.097315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.104868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.105018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.105045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.112342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.112479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.112514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.119812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.119946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.119974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.127911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.128047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.128067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.135581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.135674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.135693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.141458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.141537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.257 [2024-11-20 14:51:19.141557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.147042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.147095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:07.257 [2024-11-20 14:51:19.147114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.257 [2024-11-20 14:51:19.152038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.257 [2024-11-20 14:51:19.152143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.152163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.156786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.156853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.156872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.161691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.161744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.161762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.166751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.166827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.166846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.171832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.171977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.171997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.178042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.178196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.184985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.185120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.185139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.190844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.190926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.190945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:07.258 [2024-11-20 14:51:19.197313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4f4c0) with pdu=0x200016eff3c8 00:32:07.258 [2024-11-20 14:51:19.197454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:07.258 [2024-11-20 14:51:19.197473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:07.258 5953.00 IOPS, 744.12 MiB/s 00:32:07.258 Latency(us) 00:32:07.258 [2024-11-20T13:51:19.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.258 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:07.258 nvme0n1 : 2.00 5950.63 743.83 0.00 0.00 2683.97 2023.07 8377.21 00:32:07.258 [2024-11-20T13:51:19.216Z] =================================================================================================================== 00:32:07.258 [2024-11-20T13:51:19.216Z] Total : 5950.63 743.83 0.00 0.00 2683.97 2023.07 8377.21 00:32:07.258 { 00:32:07.258 "results": [ 00:32:07.258 { 00:32:07.258 "job": "nvme0n1", 00:32:07.258 "core_mask": "0x2", 00:32:07.258 "workload": "randwrite", 00:32:07.258 "status": "finished", 00:32:07.258 "queue_depth": 16, 00:32:07.258 "io_size": 131072, 00:32:07.258 "runtime": 2.004157, 00:32:07.258 "iops": 5950.6316121940545, 00:32:07.258 "mibps": 743.8289515242568, 00:32:07.258 "io_failed": 0, 00:32:07.258 "io_timeout": 0, 00:32:07.258 "avg_latency_us": 2683.974414979329, 00:32:07.258 "min_latency_us": 2023.0678260869565, 00:32:07.258 "max_latency_us": 8377.210434782608 
00:32:07.258 } 00:32:07.258 ], 00:32:07.258 "core_count": 1 00:32:07.258 } 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:07.515 | .driver_specific 00:32:07.515 | .nvme_error 00:32:07.515 | .status_code 00:32:07.515 | .command_transient_transport_error' 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1745292 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1745292 ']' 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1745292 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.515 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745292 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:07.774 14:51:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745292' 00:32:07.774 killing process with pid 1745292 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1745292 00:32:07.774 Received shutdown signal, test time was about 2.000000 seconds 00:32:07.774 00:32:07.774 Latency(us) 00:32:07.774 [2024-11-20T13:51:19.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.774 [2024-11-20T13:51:19.732Z] =================================================================================================================== 00:32:07.774 [2024-11-20T13:51:19.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1745292 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1743082 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1743082 ']' 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1743082 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743082 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1743082' 00:32:07.774 killing process with pid 1743082 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1743082 00:32:07.774 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1743082 00:32:08.034 00:32:08.034 real 0m14.003s 00:32:08.034 user 0m26.875s 00:32:08.034 sys 0m4.573s 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:08.034 ************************************ 00:32:08.034 END TEST nvmf_digest_error 00:32:08.034 ************************************ 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.034 rmmod nvme_tcp 00:32:08.034 rmmod nvme_fabrics 00:32:08.034 rmmod nvme_keyring 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:08.034 
14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1743082 ']' 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1743082 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1743082 ']' 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1743082 00:32:08.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1743082) - No such process 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1743082 is not found' 00:32:08.034 Process with pid 1743082 is not found 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.034 14:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.573 
14:51:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.573 00:32:10.573 real 0m36.232s 00:32:10.573 user 0m55.208s 00:32:10.573 sys 0m13.733s 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:10.573 ************************************ 00:32:10.573 END TEST nvmf_digest 00:32:10.573 ************************************ 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.573 ************************************ 00:32:10.573 START TEST nvmf_bdevperf 00:32:10.573 ************************************ 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:10.573 * Looking for test storage... 
00:32:10.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:10.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.573 --rc genhtml_branch_coverage=1 00:32:10.573 --rc genhtml_function_coverage=1 00:32:10.573 --rc genhtml_legend=1 00:32:10.573 --rc geninfo_all_blocks=1 00:32:10.573 --rc geninfo_unexecuted_blocks=1 00:32:10.573 00:32:10.573 ' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:32:10.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.573 --rc genhtml_branch_coverage=1 00:32:10.573 --rc genhtml_function_coverage=1 00:32:10.573 --rc genhtml_legend=1 00:32:10.573 --rc geninfo_all_blocks=1 00:32:10.573 --rc geninfo_unexecuted_blocks=1 00:32:10.573 00:32:10.573 ' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:10.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.573 --rc genhtml_branch_coverage=1 00:32:10.573 --rc genhtml_function_coverage=1 00:32:10.573 --rc genhtml_legend=1 00:32:10.573 --rc geninfo_all_blocks=1 00:32:10.573 --rc geninfo_unexecuted_blocks=1 00:32:10.573 00:32:10.573 ' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:10.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.573 --rc genhtml_branch_coverage=1 00:32:10.573 --rc genhtml_function_coverage=1 00:32:10.573 --rc genhtml_legend=1 00:32:10.573 --rc geninfo_all_blocks=1 00:32:10.573 --rc geninfo_unexecuted_blocks=1 00:32:10.573 00:32:10.573 ' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.573 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:10.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.574 14:51:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.144 14:51:27 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:17.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.144 
14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:17.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:17.144 Found net devices under 0000:86:00.0: cvl_0_0 00:32:17.144 14:51:27 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:17.144 Found net devices under 0000:86:00.1: cvl_0_1 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.144 14:51:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.144 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.144 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.144 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.144 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:32:17.145 00:32:17.145 --- 10.0.0.2 ping statistics --- 00:32:17.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.145 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:32:17.145 00:32:17.145 --- 10.0.0.1 ping statistics --- 00:32:17.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.145 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1749274 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1749274 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1749274 ']' 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.145 14:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.145 [2024-11-20 14:51:28.300698] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:32:17.145 [2024-11-20 14:51:28.300744] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.145 [2024-11-20 14:51:28.381655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.145 [2024-11-20 14:51:28.423830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.145 [2024-11-20 14:51:28.423868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.145 [2024-11-20 14:51:28.423875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.145 [2024-11-20 14:51:28.423882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.145 [2024-11-20 14:51:28.423887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:17.145 [2024-11-20 14:51:28.428968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.145 [2024-11-20 14:51:28.429071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.145 [2024-11-20 14:51:28.429072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.404 [2024-11-20 14:51:29.203832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.404 Malloc0 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.404 [2024-11-20 14:51:29.265820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:17.404 
14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:17.404 { 00:32:17.404 "params": { 00:32:17.404 "name": "Nvme$subsystem", 00:32:17.404 "trtype": "$TEST_TRANSPORT", 00:32:17.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.404 "adrfam": "ipv4", 00:32:17.404 "trsvcid": "$NVMF_PORT", 00:32:17.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.404 "hdgst": ${hdgst:-false}, 00:32:17.404 "ddgst": ${ddgst:-false} 00:32:17.404 }, 00:32:17.404 "method": "bdev_nvme_attach_controller" 00:32:17.404 } 00:32:17.404 EOF 00:32:17.404 )") 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:17.404 14:51:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:17.404 "params": { 00:32:17.404 "name": "Nvme1", 00:32:17.404 "trtype": "tcp", 00:32:17.404 "traddr": "10.0.0.2", 00:32:17.404 "adrfam": "ipv4", 00:32:17.404 "trsvcid": "4420", 00:32:17.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:17.404 "hdgst": false, 00:32:17.404 "ddgst": false 00:32:17.404 }, 00:32:17.404 "method": "bdev_nvme_attach_controller" 00:32:17.404 }' 00:32:17.404 [2024-11-20 14:51:29.318866] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:32:17.404 [2024-11-20 14:51:29.318908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749489 ] 00:32:17.662 [2024-11-20 14:51:29.394352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.662 [2024-11-20 14:51:29.436326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.920 Running I/O for 1 seconds... 00:32:18.853 10959.00 IOPS, 42.81 MiB/s 00:32:18.853 Latency(us) 00:32:18.853 [2024-11-20T13:51:30.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:18.853 Verification LBA range: start 0x0 length 0x4000 00:32:18.853 Nvme1n1 : 1.01 11009.42 43.01 0.00 0.00 11579.23 1567.17 14246.96 00:32:18.853 [2024-11-20T13:51:30.811Z] =================================================================================================================== 00:32:18.853 [2024-11-20T13:51:30.811Z] Total : 11009.42 43.01 0.00 0.00 11579.23 1567.17 14246.96 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1749745 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:19.112 { 00:32:19.112 "params": { 00:32:19.112 "name": "Nvme$subsystem", 00:32:19.112 "trtype": "$TEST_TRANSPORT", 00:32:19.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:19.112 "adrfam": "ipv4", 00:32:19.112 "trsvcid": "$NVMF_PORT", 00:32:19.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:19.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:19.112 "hdgst": ${hdgst:-false}, 00:32:19.112 "ddgst": ${ddgst:-false} 00:32:19.112 }, 00:32:19.112 "method": "bdev_nvme_attach_controller" 00:32:19.112 } 00:32:19.112 EOF 00:32:19.112 )") 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:32:19.112 14:51:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:19.112 "params": { 00:32:19.112 "name": "Nvme1", 00:32:19.112 "trtype": "tcp", 00:32:19.112 "traddr": "10.0.0.2", 00:32:19.112 "adrfam": "ipv4", 00:32:19.112 "trsvcid": "4420", 00:32:19.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:19.112 "hdgst": false, 00:32:19.112 "ddgst": false 00:32:19.112 }, 00:32:19.112 "method": "bdev_nvme_attach_controller" 00:32:19.112 }' 00:32:19.112 [2024-11-20 14:51:30.931486] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:32:19.112 [2024-11-20 14:51:30.931536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749745 ] 00:32:19.112 [2024-11-20 14:51:31.004817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.112 [2024-11-20 14:51:31.043868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.371 Running I/O for 15 seconds... 00:32:21.683 10870.00 IOPS, 42.46 MiB/s [2024-11-20T13:51:33.899Z] 11057.50 IOPS, 43.19 MiB/s [2024-11-20T13:51:33.899Z] 14:51:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1749274 00:32:21.941 14:51:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:22.202 [2024-11-20 14:51:33.900653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.202 [2024-11-20 14:51:33.900696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.202 [2024-11-20 14:51:33.900723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.202 [2024-11-20 14:51:33.900740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900749] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.202 [2024-11-20 14:51:33.900757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.202 [2024-11-20 14:51:33.900774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.202 [2024-11-20 14:51:33.900917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.202 [2024-11-20 14:51:33.900924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.900933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.900940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.203 [2024-11-20 14:51:33.901202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.203 [2024-11-20 14:51:33.901217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 [2024-11-20 14:51:33.901225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.203 [2024-11-20 14:51:33.901232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:22.203 
[2024-11-20 14:51:33.901241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:22.203 [2024-11-20 14:51:33.901247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... 99 further READ commands (lba:94992..95776, len:8) and one WRITE (lba:95944) printed and completed with the same ABORTED - SQ DELETION (00/08) status; repeated records elided ...] 
00:32:22.205 [2024-11-20 14:51:33.902787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203eae0 is same with the state(6) to be set 
00:32:22.205 [2024-11-20 14:51:33.902796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:32:22.205 [2024-11-20 14:51:33.902801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:32:22.205 [2024-11-20 14:51:33.902808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 
00:32:22.206 [2024-11-20 14:51:33.902822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:22.206 [2024-11-20 14:51:33.905749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:32:22.206 [2024-11-20 14:51:33.905803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 
00:32:22.206 [2024-11-20 14:51:33.906370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:22.206 [2024-11-20 14:51:33.906387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 
00:32:22.206 [2024-11-20 14:51:33.906395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 
00:32:22.206 [2024-11-20 14:51:33.906578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 
00:32:22.206 [2024-11-20 14:51:33.906756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:32:22.206 [2024-11-20 14:51:33.906765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:32:22.206 [2024-11-20 14:51:33.906773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:32:22.206 [2024-11-20 14:51:33.906781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.919068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:32:22.206 [2024-11-20 14:51:33.919530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:22.206 [2024-11-20 14:51:33.919581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 
00:32:22.206 [2024-11-20 14:51:33.919606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 
00:32:22.206 [2024-11-20 14:51:33.920213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 
00:32:22.206 [2024-11-20 14:51:33.920802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:32:22.206 [2024-11-20 14:51:33.920828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:32:22.206 [2024-11-20 14:51:33.920857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:32:22.206 [2024-11-20 14:51:33.920864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.932092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:32:22.206 [2024-11-20 14:51:33.932497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:22.206 [2024-11-20 14:51:33.932514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 
00:32:22.206 [2024-11-20 14:51:33.932522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 
00:32:22.206 [2024-11-20 14:51:33.932696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 
00:32:22.206 [2024-11-20 14:51:33.932871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:32:22.206 [2024-11-20 14:51:33.932880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:32:22.206 [2024-11-20 14:51:33.932886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:32:22.206 [2024-11-20 14:51:33.932893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.944954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:33.945353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:33.945369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.206 [2024-11-20 14:51:33.945376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.206 [2024-11-20 14:51:33.945540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.206 [2024-11-20 14:51:33.945704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.206 [2024-11-20 14:51:33.945715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.206 [2024-11-20 14:51:33.945721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.206 [2024-11-20 14:51:33.945728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.957777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:33.958203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:33.958220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.206 [2024-11-20 14:51:33.958228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.206 [2024-11-20 14:51:33.958401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.206 [2024-11-20 14:51:33.958576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.206 [2024-11-20 14:51:33.958585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.206 [2024-11-20 14:51:33.958592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.206 [2024-11-20 14:51:33.958599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.970743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:33.971081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:33.971097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.206 [2024-11-20 14:51:33.971105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.206 [2024-11-20 14:51:33.971269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.206 [2024-11-20 14:51:33.971433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.206 [2024-11-20 14:51:33.971441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.206 [2024-11-20 14:51:33.971447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.206 [2024-11-20 14:51:33.971454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.983809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:33.984251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:33.984268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.206 [2024-11-20 14:51:33.984276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.206 [2024-11-20 14:51:33.984449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.206 [2024-11-20 14:51:33.984623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.206 [2024-11-20 14:51:33.984632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.206 [2024-11-20 14:51:33.984639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.206 [2024-11-20 14:51:33.984649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:33.996700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:33.997129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:33.997147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.206 [2024-11-20 14:51:33.997154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.206 [2024-11-20 14:51:33.997329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.206 [2024-11-20 14:51:33.997502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.206 [2024-11-20 14:51:33.997511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.206 [2024-11-20 14:51:33.997517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.206 [2024-11-20 14:51:33.997524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:34.009527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:34.009954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:34.009971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.206 [2024-11-20 14:51:34.009979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.206 [2024-11-20 14:51:34.010152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.206 [2024-11-20 14:51:34.010329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.206 [2024-11-20 14:51:34.010338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.206 [2024-11-20 14:51:34.010345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.206 [2024-11-20 14:51:34.010352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.206 [2024-11-20 14:51:34.022445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.206 [2024-11-20 14:51:34.022878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.206 [2024-11-20 14:51:34.022922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.022961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.023568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.024145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.024153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.024160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.024167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.035288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.035645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.035690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.035714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.036183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.036359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.036367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.036374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.036380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.048116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.048523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.048540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.048547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.048721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.048895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.048904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.048911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.048917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.061054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.061367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.061383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.061390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.061554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.061718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.061726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.061732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.061738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.073994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.074370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.074385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.074392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.074559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.074723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.074731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.074737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.074743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.086934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.087331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.087347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.087354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.087518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.087685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.087693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.087699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.087705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.099795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.100213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.100230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.100237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.100410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.100584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.100592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.100599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.100606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.112686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.113079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.113096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.113103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.113268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.113432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.207 [2024-11-20 14:51:34.113443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.207 [2024-11-20 14:51:34.113449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.207 [2024-11-20 14:51:34.113456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.207 [2024-11-20 14:51:34.125592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.207 [2024-11-20 14:51:34.126031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.207 [2024-11-20 14:51:34.126050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.207 [2024-11-20 14:51:34.126058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.207 [2024-11-20 14:51:34.126253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.207 [2024-11-20 14:51:34.126432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.208 [2024-11-20 14:51:34.126441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.208 [2024-11-20 14:51:34.126449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.208 [2024-11-20 14:51:34.126455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.208 [2024-11-20 14:51:34.138474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.208 [2024-11-20 14:51:34.138872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.208 [2024-11-20 14:51:34.138888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.208 [2024-11-20 14:51:34.138895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.208 [2024-11-20 14:51:34.139088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.208 [2024-11-20 14:51:34.139263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.208 [2024-11-20 14:51:34.139272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.208 [2024-11-20 14:51:34.139279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.208 [2024-11-20 14:51:34.139286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.208 [2024-11-20 14:51:34.151359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.208 [2024-11-20 14:51:34.151824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.208 [2024-11-20 14:51:34.151841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.208 [2024-11-20 14:51:34.151849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.208 [2024-11-20 14:51:34.152035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.208 [2024-11-20 14:51:34.152218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.208 [2024-11-20 14:51:34.152227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.208 [2024-11-20 14:51:34.152234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.208 [2024-11-20 14:51:34.152244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.468 [2024-11-20 14:51:34.164555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.468 [2024-11-20 14:51:34.164975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.468 [2024-11-20 14:51:34.164994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.468 [2024-11-20 14:51:34.165002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.468 [2024-11-20 14:51:34.165182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.468 [2024-11-20 14:51:34.165362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.468 [2024-11-20 14:51:34.165374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.468 [2024-11-20 14:51:34.165382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.165389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.177675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.178124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.178142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.178150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.178328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.178508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.178517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.178523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.178530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.190678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.191054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.191072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.191080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.191267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.191441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.191450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.191456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.191463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.203545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.203912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.203928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.203935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.204113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.204287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.204296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.204303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.204310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.216562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.216986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.217004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.217012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.217193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.217358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.217366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.217372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.217378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.229490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.229909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.229968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.230005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.230451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.230626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.230634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.230641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.230648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.242374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.242791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.242834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.242859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.243468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.243841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.243850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.243857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.243863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.255328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.255671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.255687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.255694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.255867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.256046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.256055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.256062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.256068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.268148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.268464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.268481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.268488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.268652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.268816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.268824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.268831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.268837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.280953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.281372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.281388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.281396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.469 [2024-11-20 14:51:34.281586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.469 [2024-11-20 14:51:34.281766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.469 [2024-11-20 14:51:34.281777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.469 [2024-11-20 14:51:34.281784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.469 [2024-11-20 14:51:34.281791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.469 [2024-11-20 14:51:34.293811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.469 [2024-11-20 14:51:34.294234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.469 [2024-11-20 14:51:34.294279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.469 [2024-11-20 14:51:34.294303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.294835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.295014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.295023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.295030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.295037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.306763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.307187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.307203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.307211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.307384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.307558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.307566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.307573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.307580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.319638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.320040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.320056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.320064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.320237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.320411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.320420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.320426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.320436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.332497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.332901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.332917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.332924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.333118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.333292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.333301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.333307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.333314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 9416.67 IOPS, 36.78 MiB/s [2024-11-20T13:51:34.428Z] [2024-11-20 14:51:34.345342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.345776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.345794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.345801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.345980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.346154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.346162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.346169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.346176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.358217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.358649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.358693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.358717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.359318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.359920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.359929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.359935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.359942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.371148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.371575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.371591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.371599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.371772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.371956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.371966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.371972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.371979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.384102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.384492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.384508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.384516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.384679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.384843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.384851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.384857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.384863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.396904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.397302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.397319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.470 [2024-11-20 14:51:34.397326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.470 [2024-11-20 14:51:34.397499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.470 [2024-11-20 14:51:34.397673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.470 [2024-11-20 14:51:34.397682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.470 [2024-11-20 14:51:34.397688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.470 [2024-11-20 14:51:34.397694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.470 [2024-11-20 14:51:34.409790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.470 [2024-11-20 14:51:34.410207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.470 [2024-11-20 14:51:34.410224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.471 [2024-11-20 14:51:34.410232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.471 [2024-11-20 14:51:34.410416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.471 [2024-11-20 14:51:34.410596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.471 [2024-11-20 14:51:34.410605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.471 [2024-11-20 14:51:34.410611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.471 [2024-11-20 14:51:34.410618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.471 [2024-11-20 14:51:34.422988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.471 [2024-11-20 14:51:34.423398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.471 [2024-11-20 14:51:34.423416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.471 [2024-11-20 14:51:34.423424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.471 [2024-11-20 14:51:34.423603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.730 [2024-11-20 14:51:34.423783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.730 [2024-11-20 14:51:34.423793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.730 [2024-11-20 14:51:34.423800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.730 [2024-11-20 14:51:34.423807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.730 [2024-11-20 14:51:34.435977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.730 [2024-11-20 14:51:34.436384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.730 [2024-11-20 14:51:34.436401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.730 [2024-11-20 14:51:34.436408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.730 [2024-11-20 14:51:34.436583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.730 [2024-11-20 14:51:34.436756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.730 [2024-11-20 14:51:34.436765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.730 [2024-11-20 14:51:34.436772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.730 [2024-11-20 14:51:34.436778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.730 [2024-11-20 14:51:34.448862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.730 [2024-11-20 14:51:34.449278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.730 [2024-11-20 14:51:34.449295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.730 [2024-11-20 14:51:34.449303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.730 [2024-11-20 14:51:34.449477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.730 [2024-11-20 14:51:34.449650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.730 [2024-11-20 14:51:34.449662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.730 [2024-11-20 14:51:34.449669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.730 [2024-11-20 14:51:34.449675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.730 [2024-11-20 14:51:34.461723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.730 [2024-11-20 14:51:34.462146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.730 [2024-11-20 14:51:34.462163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.730 [2024-11-20 14:51:34.462171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.730 [2024-11-20 14:51:34.462346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.730 [2024-11-20 14:51:34.462519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.730 [2024-11-20 14:51:34.462528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.730 [2024-11-20 14:51:34.462535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.730 [2024-11-20 14:51:34.462541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.474687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.475142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.475159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.475166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.475330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.475495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.475504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.475510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.475516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.487645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.488042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.488059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.488067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.488244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.488409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.488418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.488424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.488434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.500572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.500969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.500985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.500992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.501156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.501321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.501329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.501336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.501342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.513485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.513899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.513944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.513983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.514413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.514587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.514596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.514602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.514609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.526440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.526854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.526899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.526922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.527390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.527569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.527578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.527585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.527592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.539348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.539766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.539784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.539791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.539970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.540146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.540154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.540161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.540167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.552300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.552684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.552701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.552708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.552872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.553063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.553072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.553079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.553085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.565206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.565622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.565639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.565646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.565820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.566000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.566008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.566015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.566022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.578281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.731 [2024-11-20 14:51:34.578745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.731 [2024-11-20 14:51:34.578789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.731 [2024-11-20 14:51:34.578813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.731 [2024-11-20 14:51:34.579417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.731 [2024-11-20 14:51:34.579592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.731 [2024-11-20 14:51:34.579601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.731 [2024-11-20 14:51:34.579607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.731 [2024-11-20 14:51:34.579614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.731 [2024-11-20 14:51:34.591181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.591646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.591690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.591714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.592313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.592501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.592509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.592516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.592522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.604018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.604384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.604401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.604408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.604582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.604755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.604764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.604771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.604777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.616973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.617334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.617377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.617401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.618000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.618510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.618522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.618529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.618535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.629933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.630373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.630389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.630396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.630560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.630725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.630733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.630739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.630746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.642854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.643277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.643294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.643302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.643475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.643649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.643658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.643665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.643671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.655799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.656095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.656111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.656119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.656292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.656467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.656475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.656482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.656492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.668762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.669133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.669151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.669159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.669339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.669519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.669528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.669535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.669541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.732 [2024-11-20 14:51:34.681980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.732 [2024-11-20 14:51:34.682335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.732 [2024-11-20 14:51:34.682354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.732 [2024-11-20 14:51:34.682362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.732 [2024-11-20 14:51:34.682543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.732 [2024-11-20 14:51:34.682723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.732 [2024-11-20 14:51:34.682732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.732 [2024-11-20 14:51:34.682739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.732 [2024-11-20 14:51:34.682746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.992 [2024-11-20 14:51:34.695211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.992 [2024-11-20 14:51:34.695586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.992 [2024-11-20 14:51:34.695603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.992 [2024-11-20 14:51:34.695611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.992 [2024-11-20 14:51:34.695790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.992 [2024-11-20 14:51:34.695976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.992 [2024-11-20 14:51:34.695986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.992 [2024-11-20 14:51:34.695992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.992 [2024-11-20 14:51:34.695999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.992 [2024-11-20 14:51:34.708247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.992 [2024-11-20 14:51:34.708622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.992 [2024-11-20 14:51:34.708638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.992 [2024-11-20 14:51:34.708646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.992 [2024-11-20 14:51:34.708819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.992 [2024-11-20 14:51:34.709002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.992 [2024-11-20 14:51:34.709011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.992 [2024-11-20 14:51:34.709018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.992 [2024-11-20 14:51:34.709025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.992 [2024-11-20 14:51:34.721168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.992 [2024-11-20 14:51:34.721599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.992 [2024-11-20 14:51:34.721616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.992 [2024-11-20 14:51:34.721623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.992 [2024-11-20 14:51:34.721798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.992 [2024-11-20 14:51:34.721981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.992 [2024-11-20 14:51:34.721991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.992 [2024-11-20 14:51:34.721997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.992 [2024-11-20 14:51:34.722004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.992 [2024-11-20 14:51:34.734067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.992 [2024-11-20 14:51:34.734488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.992 [2024-11-20 14:51:34.734505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.992 [2024-11-20 14:51:34.734513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.992 [2024-11-20 14:51:34.734687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.992 [2024-11-20 14:51:34.734861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.992 [2024-11-20 14:51:34.734870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.992 [2024-11-20 14:51:34.734877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.992 [2024-11-20 14:51:34.734884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.992 [2024-11-20 14:51:34.746966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.992 [2024-11-20 14:51:34.747388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.992 [2024-11-20 14:51:34.747406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.747413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.747590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.747764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.747773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.747779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.747785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.760142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.760506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.760523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.760531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.760710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.760889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.760898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.760905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.760912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.773215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.773582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.773599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.773607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.773786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.773973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.773983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.773990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.773997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.786233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.786627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.786644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.786651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.786824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.787005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.787017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.787024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.787030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.799277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.799663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.799680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.799687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.799861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.800042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.800052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.800058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.800065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.812306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.812662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.812679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.812686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.812865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.813051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.813061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.813068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.813074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.825181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.825487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.825530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.825554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.826054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.826235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.826244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.826250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.826261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.838175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:22.993 [2024-11-20 14:51:34.838539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:22.993 [2024-11-20 14:51:34.838583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:22.993 [2024-11-20 14:51:34.838608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:22.993 [2024-11-20 14:51:34.839117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:22.993 [2024-11-20 14:51:34.839292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:22.993 [2024-11-20 14:51:34.839301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:22.993 [2024-11-20 14:51:34.839308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:22.993 [2024-11-20 14:51:34.839314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:22.993 [2024-11-20 14:51:34.851153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.993 [2024-11-20 14:51:34.851512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.993 [2024-11-20 14:51:34.851530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.993 [2024-11-20 14:51:34.851538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.993 [2024-11-20 14:51:34.851712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.993 [2024-11-20 14:51:34.851886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.993 [2024-11-20 14:51:34.851894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.993 [2024-11-20 14:51:34.851901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.993 [2024-11-20 14:51:34.851908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.993 [2024-11-20 14:51:34.864183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.993 [2024-11-20 14:51:34.864550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.993 [2024-11-20 14:51:34.864567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.993 [2024-11-20 14:51:34.864574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.993 [2024-11-20 14:51:34.865171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.993 [2024-11-20 14:51:34.865345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.993 [2024-11-20 14:51:34.865354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.993 [2024-11-20 14:51:34.865361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.993 [2024-11-20 14:51:34.865367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.993 [2024-11-20 14:51:34.877075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.993 [2024-11-20 14:51:34.877417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.994 [2024-11-20 14:51:34.877461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.994 [2024-11-20 14:51:34.877485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.994 [2024-11-20 14:51:34.877985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.994 [2024-11-20 14:51:34.878160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.994 [2024-11-20 14:51:34.878169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.994 [2024-11-20 14:51:34.878175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.994 [2024-11-20 14:51:34.878182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.994 [2024-11-20 14:51:34.890136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.994 [2024-11-20 14:51:34.890509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.994 [2024-11-20 14:51:34.890526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.994 [2024-11-20 14:51:34.890534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.994 [2024-11-20 14:51:34.890708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.994 [2024-11-20 14:51:34.890882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.994 [2024-11-20 14:51:34.890890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.994 [2024-11-20 14:51:34.890897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.994 [2024-11-20 14:51:34.890904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.994 [2024-11-20 14:51:34.903172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.994 [2024-11-20 14:51:34.903519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.994 [2024-11-20 14:51:34.903535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.994 [2024-11-20 14:51:34.903542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.994 [2024-11-20 14:51:34.903715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.994 [2024-11-20 14:51:34.903889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.994 [2024-11-20 14:51:34.903904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.994 [2024-11-20 14:51:34.903910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.994 [2024-11-20 14:51:34.903916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.994 [2024-11-20 14:51:34.916187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.994 [2024-11-20 14:51:34.916524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.994 [2024-11-20 14:51:34.916541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.994 [2024-11-20 14:51:34.916548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.994 [2024-11-20 14:51:34.916725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.994 [2024-11-20 14:51:34.916900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.994 [2024-11-20 14:51:34.916909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.994 [2024-11-20 14:51:34.916915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.994 [2024-11-20 14:51:34.916922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.994 [2024-11-20 14:51:34.929230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.994 [2024-11-20 14:51:34.929580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.994 [2024-11-20 14:51:34.929597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.994 [2024-11-20 14:51:34.929605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.994 [2024-11-20 14:51:34.929780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.994 [2024-11-20 14:51:34.929966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.994 [2024-11-20 14:51:34.929992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.994 [2024-11-20 14:51:34.930001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.994 [2024-11-20 14:51:34.930008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:22.994 [2024-11-20 14:51:34.942438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:22.994 [2024-11-20 14:51:34.942782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.994 [2024-11-20 14:51:34.942799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:22.994 [2024-11-20 14:51:34.942810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:22.994 [2024-11-20 14:51:34.943001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:22.994 [2024-11-20 14:51:34.943182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:22.994 [2024-11-20 14:51:34.943191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:22.994 [2024-11-20 14:51:34.943198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:22.994 [2024-11-20 14:51:34.943205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:34.955517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:34.955890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:34.955935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:34.955973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:34.956489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:34.956668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:34.956680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:34.956687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:34.956694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:34.968455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:34.968747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:34.968764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:34.968771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:34.968946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:34.969126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:34.969134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:34.969140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:34.969147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:34.981408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:34.981835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:34.981852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:34.981860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:34.982039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:34.982214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:34.982223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:34.982230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:34.982236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:34.994431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:34.994771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:34.994787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:34.994795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:34.994975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:34.995150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:34.995159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:34.995165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:34.995177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:35.007482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:35.007787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:35.007803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:35.007811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:35.007991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:35.008165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:35.008174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:35.008181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:35.008187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:35.020439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:35.020865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:35.020881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:35.020889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:35.021068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:35.021242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:35.021251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:35.021258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:35.021265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:35.033329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:35.033698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:35.033715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:35.033722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:35.033896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:35.034078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:35.034086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.254 [2024-11-20 14:51:35.034093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.254 [2024-11-20 14:51:35.034100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.254 [2024-11-20 14:51:35.046302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.254 [2024-11-20 14:51:35.046671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.254 [2024-11-20 14:51:35.046687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.254 [2024-11-20 14:51:35.046694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.254 [2024-11-20 14:51:35.046868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.254 [2024-11-20 14:51:35.047050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.254 [2024-11-20 14:51:35.047059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.047066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.047072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.059368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.059773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.059790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.059797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.059978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.060153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.060161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.060168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.060174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.072348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.072705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.072721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.072729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.072902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.073082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.073091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.073098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.073105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.085357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.085806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.085849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.085874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.086469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.086643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.086651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.086658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.086664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.098252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.098723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.098768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.098791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.099391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.099872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.099880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.099887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.099893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.111170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.111617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.111672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.111696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.112295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.112838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.112855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.112870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.112883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.126607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.127037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.127059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.127070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.127325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.127581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.127597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.127606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.127616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.139587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.140018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.140057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.140083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.140622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.140795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.140804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.140811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.140817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.152552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.255 [2024-11-20 14:51:35.152907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.255 [2024-11-20 14:51:35.152965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.255 [2024-11-20 14:51:35.152991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.255 [2024-11-20 14:51:35.153574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.255 [2024-11-20 14:51:35.153748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.255 [2024-11-20 14:51:35.153757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.255 [2024-11-20 14:51:35.153764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.255 [2024-11-20 14:51:35.153770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.255 [2024-11-20 14:51:35.167412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.255 [2024-11-20 14:51:35.167940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.255 [2024-11-20 14:51:35.168010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.255 [2024-11-20 14:51:35.168035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.255 [2024-11-20 14:51:35.168595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.255 [2024-11-20 14:51:35.168851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.255 [2024-11-20 14:51:35.168863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.255 [2024-11-20 14:51:35.168873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.255 [2024-11-20 14:51:35.168886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.255 [2024-11-20 14:51:35.180346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.255 [2024-11-20 14:51:35.180789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.255 [2024-11-20 14:51:35.180806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.255 [2024-11-20 14:51:35.180813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.255 [2024-11-20 14:51:35.180993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.255 [2024-11-20 14:51:35.181187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.256 [2024-11-20 14:51:35.181197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.256 [2024-11-20 14:51:35.181203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.256 [2024-11-20 14:51:35.181210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.256 [2024-11-20 14:51:35.193413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.256 [2024-11-20 14:51:35.193847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.256 [2024-11-20 14:51:35.193879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.256 [2024-11-20 14:51:35.193903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.256 [2024-11-20 14:51:35.194504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.256 [2024-11-20 14:51:35.195044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.256 [2024-11-20 14:51:35.195053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.256 [2024-11-20 14:51:35.195060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.256 [2024-11-20 14:51:35.195067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.256 [2024-11-20 14:51:35.206474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.256 [2024-11-20 14:51:35.206944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.256 [2024-11-20 14:51:35.206967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.256 [2024-11-20 14:51:35.206975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.256 [2024-11-20 14:51:35.207166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.256 [2024-11-20 14:51:35.207346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.256 [2024-11-20 14:51:35.207355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.256 [2024-11-20 14:51:35.207362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.256 [2024-11-20 14:51:35.207369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.219494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.219941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.220001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.220025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.220612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.221032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.221041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.221048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.221055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.232496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.232847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.232864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.232872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.233050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.233224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.233233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.233240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.233246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.245631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.245996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.246014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.246021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.246200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.246379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.246388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.246395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.246402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.258552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.258970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.258987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.258995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.259171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.259345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.259353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.259360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.259366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.271468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.271897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.271943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.271983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.272569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.273167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.273195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.273227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.273234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.284438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.284857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.284873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.284880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.285070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.285245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.285253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.285260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.285267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.297290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.297721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.297766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.297790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.298286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.298460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.298472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.298479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.298486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.310123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.310548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.310564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.310571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.310735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.310898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.310906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.310913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.310919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.322932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.323369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.517 [2024-11-20 14:51:35.323414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.517 [2024-11-20 14:51:35.323439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.517 [2024-11-20 14:51:35.324041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.517 [2024-11-20 14:51:35.324215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.517 [2024-11-20 14:51:35.324224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.517 [2024-11-20 14:51:35.324230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.517 [2024-11-20 14:51:35.324237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.517 [2024-11-20 14:51:35.335839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.517 [2024-11-20 14:51:35.336295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.336331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.336356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.336941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.337489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.337498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.337505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.337515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 7062.50 IOPS, 27.59 MiB/s [2024-11-20T13:51:35.476Z] [2024-11-20 14:51:35.348694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.349118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.349134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.349142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.349306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.349470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.349478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.349484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.349491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.361600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.362018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.362035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.362042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.362207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.362372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.362380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.362386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.362392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.374509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.374867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.374883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.374890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.375080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.375253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.375262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.375269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.375276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.387466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.387890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.387905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.387912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.388103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.388278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.388286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.388293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.388299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.400346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.400749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.400766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.400773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.400937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.401129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.401138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.401145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.401152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.413330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.413778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.413834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.413858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.414382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.414557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.414565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.414572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.414578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.426201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.426588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.426632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.426655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.427263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.427656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.427665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.427672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.427678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.439138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:23.518 [2024-11-20 14:51:35.439580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:23.518 [2024-11-20 14:51:35.439597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:23.518 [2024-11-20 14:51:35.439605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:23.518 [2024-11-20 14:51:35.439779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:23.518 [2024-11-20 14:51:35.439959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:23.518 [2024-11-20 14:51:35.439968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:23.518 [2024-11-20 14:51:35.439976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:23.518 [2024-11-20 14:51:35.439983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:23.518 [2024-11-20 14:51:35.452323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.518 [2024-11-20 14:51:35.452682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.518 [2024-11-20 14:51:35.452700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.518 [2024-11-20 14:51:35.452708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.518 [2024-11-20 14:51:35.452887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.518 [2024-11-20 14:51:35.453072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.518 [2024-11-20 14:51:35.453082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.518 [2024-11-20 14:51:35.453088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.519 [2024-11-20 14:51:35.453095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.519 [2024-11-20 14:51:35.465244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.519 [2024-11-20 14:51:35.465692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.519 [2024-11-20 14:51:35.465709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.519 [2024-11-20 14:51:35.465716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.519 [2024-11-20 14:51:35.465889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.519 [2024-11-20 14:51:35.466087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.519 [2024-11-20 14:51:35.466099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.519 [2024-11-20 14:51:35.466106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.519 [2024-11-20 14:51:35.466113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.478258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.478716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.478767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.478792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.479393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.479842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.479850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.479857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.479864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.491204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.491623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.491638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.491662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.491837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.492033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.492043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.492050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.492056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.504026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.504451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.504495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.504519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.505119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.505698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.505707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.505714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.505724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.516836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.517256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.517305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.517330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.517840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.518019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.518028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.518035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.518041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.529680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.530027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.530043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.530051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.530214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.530378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.530386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.530393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.530399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.542490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.542915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.542931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.542938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.543132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.543307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.543316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.543323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.543329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.779 [2024-11-20 14:51:35.555297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.779 [2024-11-20 14:51:35.555721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.779 [2024-11-20 14:51:35.555737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.779 [2024-11-20 14:51:35.555744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.779 [2024-11-20 14:51:35.555908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.779 [2024-11-20 14:51:35.556100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.779 [2024-11-20 14:51:35.556109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.779 [2024-11-20 14:51:35.556116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.779 [2024-11-20 14:51:35.556123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.568215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.568596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.568640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.568664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.569127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.569301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.569310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.569317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.569324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.581050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.581465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.581481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.581489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.581652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.581816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.581824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.581830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.581836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.593881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.594305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.594322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.594330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.594507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.594681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.594690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.594697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.594703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.606809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.607236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.607281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.607305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.607758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.607932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.607941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.607952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.607960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.619667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.620091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.620107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.620128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.620714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.621235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.621245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.621252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.621258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.632593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.632972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.632989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.632996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.633160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.633325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.633335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.633342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.633348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.645467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.645899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.645945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.645985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.646571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.647082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.647090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.647097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.647104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.658279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.658626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.658641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.658649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.658812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.658997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.659006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.659013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.659019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.671197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.671530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.671583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.671607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.672140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.672319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.672328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.672336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.672348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.780 [2024-11-20 14:51:35.684049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.780 [2024-11-20 14:51:35.684497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.780 [2024-11-20 14:51:35.684514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.780 [2024-11-20 14:51:35.684521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.780 [2024-11-20 14:51:35.684695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.780 [2024-11-20 14:51:35.684880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.780 [2024-11-20 14:51:35.684888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.780 [2024-11-20 14:51:35.684895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.780 [2024-11-20 14:51:35.684901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.781 [2024-11-20 14:51:35.697005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.781 [2024-11-20 14:51:35.697430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.781 [2024-11-20 14:51:35.697473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.781 [2024-11-20 14:51:35.697497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.781 [2024-11-20 14:51:35.698077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.781 [2024-11-20 14:51:35.698251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.781 [2024-11-20 14:51:35.698260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.781 [2024-11-20 14:51:35.698267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.781 [2024-11-20 14:51:35.698274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.781 [2024-11-20 14:51:35.710147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.781 [2024-11-20 14:51:35.710571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.781 [2024-11-20 14:51:35.710589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.781 [2024-11-20 14:51:35.710597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.781 [2024-11-20 14:51:35.710777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.781 [2024-11-20 14:51:35.710961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.781 [2024-11-20 14:51:35.710971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.781 [2024-11-20 14:51:35.710979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.781 [2024-11-20 14:51:35.710986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:23.781 [2024-11-20 14:51:35.723142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:23.781 [2024-11-20 14:51:35.723555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.781 [2024-11-20 14:51:35.723571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:23.781 [2024-11-20 14:51:35.723578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:23.781 [2024-11-20 14:51:35.723752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:23.781 [2024-11-20 14:51:35.723925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:23.781 [2024-11-20 14:51:35.723934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:23.781 [2024-11-20 14:51:35.723941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:23.781 [2024-11-20 14:51:35.723953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.736193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.041 [2024-11-20 14:51:35.736542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.041 [2024-11-20 14:51:35.736559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.041 [2024-11-20 14:51:35.736566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.041 [2024-11-20 14:51:35.736740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.041 [2024-11-20 14:51:35.736914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.041 [2024-11-20 14:51:35.736923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.041 [2024-11-20 14:51:35.736930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.041 [2024-11-20 14:51:35.736936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.749147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.041 [2024-11-20 14:51:35.749546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.041 [2024-11-20 14:51:35.749562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.041 [2024-11-20 14:51:35.749569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.041 [2024-11-20 14:51:35.749733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.041 [2024-11-20 14:51:35.749897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.041 [2024-11-20 14:51:35.749906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.041 [2024-11-20 14:51:35.749912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.041 [2024-11-20 14:51:35.749918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.761970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.041 [2024-11-20 14:51:35.762403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.041 [2024-11-20 14:51:35.762449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.041 [2024-11-20 14:51:35.762473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.041 [2024-11-20 14:51:35.763083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.041 [2024-11-20 14:51:35.763509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.041 [2024-11-20 14:51:35.763518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.041 [2024-11-20 14:51:35.763524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.041 [2024-11-20 14:51:35.763530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.774887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.041 [2024-11-20 14:51:35.775312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.041 [2024-11-20 14:51:35.775329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.041 [2024-11-20 14:51:35.775337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.041 [2024-11-20 14:51:35.775510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.041 [2024-11-20 14:51:35.775684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.041 [2024-11-20 14:51:35.775693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.041 [2024-11-20 14:51:35.775699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.041 [2024-11-20 14:51:35.775705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.787838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.041 [2024-11-20 14:51:35.788255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.041 [2024-11-20 14:51:35.788272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.041 [2024-11-20 14:51:35.788280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.041 [2024-11-20 14:51:35.788454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.041 [2024-11-20 14:51:35.788627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.041 [2024-11-20 14:51:35.788636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.041 [2024-11-20 14:51:35.788642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.041 [2024-11-20 14:51:35.788649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.800734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.041 [2024-11-20 14:51:35.801168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.041 [2024-11-20 14:51:35.801185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.041 [2024-11-20 14:51:35.801192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.041 [2024-11-20 14:51:35.801366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.041 [2024-11-20 14:51:35.801543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.041 [2024-11-20 14:51:35.801555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.041 [2024-11-20 14:51:35.801562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.041 [2024-11-20 14:51:35.801569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.041 [2024-11-20 14:51:35.813697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.041 [2024-11-20 14:51:35.814029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.041 [2024-11-20 14:51:35.814046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.041 [2024-11-20 14:51:35.814054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.041 [2024-11-20 14:51:35.814229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.041 [2024-11-20 14:51:35.814414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.041 [2024-11-20 14:51:35.814422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.041 [2024-11-20 14:51:35.814429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.041 [2024-11-20 14:51:35.814435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.041 [2024-11-20 14:51:35.826630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.041 [2024-11-20 14:51:35.827041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.041 [2024-11-20 14:51:35.827058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.041 [2024-11-20 14:51:35.827066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.041 [2024-11-20 14:51:35.827239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.041 [2024-11-20 14:51:35.827413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.041 [2024-11-20 14:51:35.827421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.041 [2024-11-20 14:51:35.827428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.041 [2024-11-20 14:51:35.827435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.041 [2024-11-20 14:51:35.839530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.041 [2024-11-20 14:51:35.839951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.041 [2024-11-20 14:51:35.839968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.041 [2024-11-20 14:51:35.839976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.041 [2024-11-20 14:51:35.840149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.041 [2024-11-20 14:51:35.840324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.041 [2024-11-20 14:51:35.840332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.041 [2024-11-20 14:51:35.840339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.041 [2024-11-20 14:51:35.840348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.852361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.852761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.852778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.852786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.852956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.853144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.853153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.853159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.853166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.865208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.865609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.865626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.865633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.865806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.865986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.865995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.866002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.866008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.878105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.878433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.878450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.878457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.878621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.878785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.878793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.878800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.878806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.890928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.891358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.891403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.891427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.891945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.892324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.892340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.892355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.892367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.905490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.905978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.906000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.906010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.906255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.906501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.906513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.906522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.906531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.918463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.918860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.918876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.918884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.919075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.919249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.919258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.919265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.919271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.931471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.931890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.931907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.931914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.932097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.932271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.932279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.932286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.932292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.944325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.944727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.944744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.944751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.944915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.945109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.945118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.945124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.945131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.957157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.957576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.957593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.957601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.957780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.957965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.957974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.957981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.042 [2024-11-20 14:51:35.957988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.042 [2024-11-20 14:51:35.970351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.042 [2024-11-20 14:51:35.970711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.042 [2024-11-20 14:51:35.970728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.042 [2024-11-20 14:51:35.970735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.042 [2024-11-20 14:51:35.970914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.042 [2024-11-20 14:51:35.971098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.042 [2024-11-20 14:51:35.971111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.042 [2024-11-20 14:51:35.971118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.043 [2024-11-20 14:51:35.971125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.043 [2024-11-20 14:51:35.983159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.043 [2024-11-20 14:51:35.983561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.043 [2024-11-20 14:51:35.983606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.043 [2024-11-20 14:51:35.983630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.043 [2024-11-20 14:51:35.984116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.043 [2024-11-20 14:51:35.984296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.043 [2024-11-20 14:51:35.984305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.043 [2024-11-20 14:51:35.984312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.043 [2024-11-20 14:51:35.984318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.303 [2024-11-20 14:51:35.996258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.303 [2024-11-20 14:51:35.996686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.303 [2024-11-20 14:51:35.996704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.303 [2024-11-20 14:51:35.996712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.303 [2024-11-20 14:51:35.996892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.303 [2024-11-20 14:51:35.997079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.303 [2024-11-20 14:51:35.997089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.303 [2024-11-20 14:51:35.997096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.303 [2024-11-20 14:51:35.997103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.303 [2024-11-20 14:51:36.009156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.303 [2024-11-20 14:51:36.009586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.303 [2024-11-20 14:51:36.009603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.303 [2024-11-20 14:51:36.009611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.303 [2024-11-20 14:51:36.009784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.303 [2024-11-20 14:51:36.009967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.303 [2024-11-20 14:51:36.009993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.303 [2024-11-20 14:51:36.010000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.303 [2024-11-20 14:51:36.010010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.303 [2024-11-20 14:51:36.022000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.303 [2024-11-20 14:51:36.022402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.303 [2024-11-20 14:51:36.022446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.303 [2024-11-20 14:51:36.022470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.303 [2024-11-20 14:51:36.022992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.303 [2024-11-20 14:51:36.023168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.303 [2024-11-20 14:51:36.023176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.303 [2024-11-20 14:51:36.023183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.303 [2024-11-20 14:51:36.023189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.303 [2024-11-20 14:51:36.037056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.303 [2024-11-20 14:51:36.037560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.303 [2024-11-20 14:51:36.037606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.303 [2024-11-20 14:51:36.037630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.303 [2024-11-20 14:51:36.038124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.038381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.038393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.038403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.038413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.050065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.050484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.050501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.050508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.050682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.050856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.050864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.050871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.050878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.062914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.063367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.063385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.063392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.063571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.063751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.063762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.063770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.063777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.075904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.076333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.076378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.076402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.076910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.077089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.077099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.077106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.077113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.088872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.089309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.089327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.089335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.089510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.089686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.089694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.089701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.089708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.101818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.102273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.102321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.102345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.102913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.103094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.103104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.103110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.103117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.114798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.115233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.115250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.115258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.115431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.115605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.115614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.115621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.115627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.128017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.128473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.128490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.128498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.128671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.128845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.128853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.128860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.128867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.141029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.141328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.141372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.141396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.141998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.142435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.142447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.142454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.142461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.154089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.154436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.154452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.304 [2024-11-20 14:51:36.154459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.304 [2024-11-20 14:51:36.154622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.304 [2024-11-20 14:51:36.154786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.304 [2024-11-20 14:51:36.154795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.304 [2024-11-20 14:51:36.154801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.304 [2024-11-20 14:51:36.154807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.304 [2024-11-20 14:51:36.167080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.304 [2024-11-20 14:51:36.167393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.304 [2024-11-20 14:51:36.167409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.167416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.167590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.167764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.167773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.167780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.167786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.305 [2024-11-20 14:51:36.180010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.305 [2024-11-20 14:51:36.180383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.305 [2024-11-20 14:51:36.180427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.180450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.181052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.181473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.181481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.181488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.181498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.305 [2024-11-20 14:51:36.193053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.305 [2024-11-20 14:51:36.193481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.305 [2024-11-20 14:51:36.193524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.193548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.194143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.194347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.194355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.194362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.194369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.305 [2024-11-20 14:51:36.206007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.305 [2024-11-20 14:51:36.206380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.305 [2024-11-20 14:51:36.206396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.206404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.206583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.206762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.206771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.206778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.206785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.305 [2024-11-20 14:51:36.218962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.305 [2024-11-20 14:51:36.219411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.305 [2024-11-20 14:51:36.219442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.219468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.220068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.220601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.220611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.220619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.220625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.305 [2024-11-20 14:51:36.232180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.305 [2024-11-20 14:51:36.232625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.305 [2024-11-20 14:51:36.232675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.232699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.233225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.233404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.233414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.233421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.233428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.305 [2024-11-20 14:51:36.245240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.305 [2024-11-20 14:51:36.245582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.305 [2024-11-20 14:51:36.245598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.305 [2024-11-20 14:51:36.245606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.305 [2024-11-20 14:51:36.245780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.305 [2024-11-20 14:51:36.245961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.305 [2024-11-20 14:51:36.245971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.305 [2024-11-20 14:51:36.245994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.305 [2024-11-20 14:51:36.246003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.258393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.258758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.568 [2024-11-20 14:51:36.258778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.568 [2024-11-20 14:51:36.258786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.568 [2024-11-20 14:51:36.258973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.568 [2024-11-20 14:51:36.259157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.568 [2024-11-20 14:51:36.259167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.568 [2024-11-20 14:51:36.259174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.568 [2024-11-20 14:51:36.259181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.271283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.271661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.568 [2024-11-20 14:51:36.271678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.568 [2024-11-20 14:51:36.271685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.568 [2024-11-20 14:51:36.271863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.568 [2024-11-20 14:51:36.272042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.568 [2024-11-20 14:51:36.272051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.568 [2024-11-20 14:51:36.272058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.568 [2024-11-20 14:51:36.272064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.284166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.284456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.568 [2024-11-20 14:51:36.284472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.568 [2024-11-20 14:51:36.284479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.568 [2024-11-20 14:51:36.284672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.568 [2024-11-20 14:51:36.284851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.568 [2024-11-20 14:51:36.284860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.568 [2024-11-20 14:51:36.284867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.568 [2024-11-20 14:51:36.284874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.297082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.297441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.568 [2024-11-20 14:51:36.297457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.568 [2024-11-20 14:51:36.297464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.568 [2024-11-20 14:51:36.297638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.568 [2024-11-20 14:51:36.297812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.568 [2024-11-20 14:51:36.297821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.568 [2024-11-20 14:51:36.297828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.568 [2024-11-20 14:51:36.297834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.310000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.310289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.568 [2024-11-20 14:51:36.310305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.568 [2024-11-20 14:51:36.310313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.568 [2024-11-20 14:51:36.310485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.568 [2024-11-20 14:51:36.310660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.568 [2024-11-20 14:51:36.310674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.568 [2024-11-20 14:51:36.310681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.568 [2024-11-20 14:51:36.310688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.322893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.323322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.568 [2024-11-20 14:51:36.323339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.568 [2024-11-20 14:51:36.323346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.568 [2024-11-20 14:51:36.323520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.568 [2024-11-20 14:51:36.323695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.568 [2024-11-20 14:51:36.323703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.568 [2024-11-20 14:51:36.323711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.568 [2024-11-20 14:51:36.323717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.568 [2024-11-20 14:51:36.335880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.568 [2024-11-20 14:51:36.336184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.569 [2024-11-20 14:51:36.336201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.569 [2024-11-20 14:51:36.336209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.569 [2024-11-20 14:51:36.336383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.569 [2024-11-20 14:51:36.336556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.569 [2024-11-20 14:51:36.336565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.569 [2024-11-20 14:51:36.336572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.569 [2024-11-20 14:51:36.336578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.569 5650.00 IOPS, 22.07 MiB/s [2024-11-20T13:51:36.527Z] [2024-11-20 14:51:36.348811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.569 [2024-11-20 14:51:36.349201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.569 [2024-11-20 14:51:36.349235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.569 [2024-11-20 14:51:36.349260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.569 [2024-11-20 14:51:36.349822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.569 [2024-11-20 14:51:36.350001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.569 [2024-11-20 14:51:36.350010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.569 [2024-11-20 14:51:36.350017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.569 [2024-11-20 14:51:36.350028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.569 [2024-11-20 14:51:36.361828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.569 [2024-11-20 14:51:36.362193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.569 [2024-11-20 14:51:36.362210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.569 [2024-11-20 14:51:36.362217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.569 [2024-11-20 14:51:36.362391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.569 [2024-11-20 14:51:36.362566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.569 [2024-11-20 14:51:36.362575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.569 [2024-11-20 14:51:36.362582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.569 [2024-11-20 14:51:36.362589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.569 [2024-11-20 14:51:36.374894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.569 [2024-11-20 14:51:36.375187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.569 [2024-11-20 14:51:36.375204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.569 [2024-11-20 14:51:36.375212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.569 [2024-11-20 14:51:36.375385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.569 [2024-11-20 14:51:36.375558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.569 [2024-11-20 14:51:36.375567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.569 [2024-11-20 14:51:36.375574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.569 [2024-11-20 14:51:36.375580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.569 [2024-11-20 14:51:36.387951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.569 [2024-11-20 14:51:36.388362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.569 [2024-11-20 14:51:36.388406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.569 [2024-11-20 14:51:36.388430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.569 [2024-11-20 14:51:36.388943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.569 [2024-11-20 14:51:36.389126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.569 [2024-11-20 14:51:36.389135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.569 [2024-11-20 14:51:36.389141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.569 [2024-11-20 14:51:36.389148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.569 [2024-11-20 14:51:36.400874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.569 [2024-11-20 14:51:36.401320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.569 [2024-11-20 14:51:36.401358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.569 [2024-11-20 14:51:36.401383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.569 [2024-11-20 14:51:36.401928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.569 [2024-11-20 14:51:36.402107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.569 [2024-11-20 14:51:36.402117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.569 [2024-11-20 14:51:36.402124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.569 [2024-11-20 14:51:36.402130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.569 [2024-11-20 14:51:36.413930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.569 [2024-11-20 14:51:36.414293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.569 [2024-11-20 14:51:36.414310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.569 [2024-11-20 14:51:36.414318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.569 [2024-11-20 14:51:36.414492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.569 [2024-11-20 14:51:36.414667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.569 [2024-11-20 14:51:36.414676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.569 [2024-11-20 14:51:36.414683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.569 [2024-11-20 14:51:36.414690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.569 [2024-11-20 14:51:36.426806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.569 [2024-11-20 14:51:36.427141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.569 [2024-11-20 14:51:36.427158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.569 [2024-11-20 14:51:36.427166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.569 [2024-11-20 14:51:36.427339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.569 [2024-11-20 14:51:36.427513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.569 [2024-11-20 14:51:36.427521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.569 [2024-11-20 14:51:36.427528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.569 [2024-11-20 14:51:36.427535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.569 [2024-11-20 14:51:36.439658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.569 [2024-11-20 14:51:36.440113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.569 [2024-11-20 14:51:36.440131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.569 [2024-11-20 14:51:36.440142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.569 [2024-11-20 14:51:36.440316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.569 [2024-11-20 14:51:36.440490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.569 [2024-11-20 14:51:36.440499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.569 [2024-11-20 14:51:36.440506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.569 [2024-11-20 14:51:36.440512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.569 [2024-11-20 14:51:36.452554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.569 [2024-11-20 14:51:36.452996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.569 [2024-11-20 14:51:36.453013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.569 [2024-11-20 14:51:36.453021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.569 [2024-11-20 14:51:36.453195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.569 [2024-11-20 14:51:36.453373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.569 [2024-11-20 14:51:36.453381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.569 [2024-11-20 14:51:36.453387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.569 [2024-11-20 14:51:36.453393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.569 [2024-11-20 14:51:36.465465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.570 [2024-11-20 14:51:36.465909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.570 [2024-11-20 14:51:36.465926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.570 [2024-11-20 14:51:36.465933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.570 [2024-11-20 14:51:36.466111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.570 [2024-11-20 14:51:36.466285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.570 [2024-11-20 14:51:36.466294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.570 [2024-11-20 14:51:36.466300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.570 [2024-11-20 14:51:36.466307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.570 [2024-11-20 14:51:36.478634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.570 [2024-11-20 14:51:36.479073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.570 [2024-11-20 14:51:36.479118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.570 [2024-11-20 14:51:36.479144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.570 [2024-11-20 14:51:36.479729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.570 [2024-11-20 14:51:36.479956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.570 [2024-11-20 14:51:36.479968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.570 [2024-11-20 14:51:36.479976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.570 [2024-11-20 14:51:36.479983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.570 [2024-11-20 14:51:36.491769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.570 [2024-11-20 14:51:36.492133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.570 [2024-11-20 14:51:36.492150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.570 [2024-11-20 14:51:36.492157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.570 [2024-11-20 14:51:36.492336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.570 [2024-11-20 14:51:36.492515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.570 [2024-11-20 14:51:36.492523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.570 [2024-11-20 14:51:36.492530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.570 [2024-11-20 14:51:36.492537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.570 [2024-11-20 14:51:36.504888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.570 [2024-11-20 14:51:36.505250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.570 [2024-11-20 14:51:36.505266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.570 [2024-11-20 14:51:36.505274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.570 [2024-11-20 14:51:36.505447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.570 [2024-11-20 14:51:36.505622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.570 [2024-11-20 14:51:36.505630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.570 [2024-11-20 14:51:36.505637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.570 [2024-11-20 14:51:36.505643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.570 [2024-11-20 14:51:36.518004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.570 [2024-11-20 14:51:36.518457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.570 [2024-11-20 14:51:36.518504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.570 [2024-11-20 14:51:36.518528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.570 [2024-11-20 14:51:36.518975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.570 [2024-11-20 14:51:36.519152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.570 [2024-11-20 14:51:36.519161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.570 [2024-11-20 14:51:36.519168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.570 [2024-11-20 14:51:36.519178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.531189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.531647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.531693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.531717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.532235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.532401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.532410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.532416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.532422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.544066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.544493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.544509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.544516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.544680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.544844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.544852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.544858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.544864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.556896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.557326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.557371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.557396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.557944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.558123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.558132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.558140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.558146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.569764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.570094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.570110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.570118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.570291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.570465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.570474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.570481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.570488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.582644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.582973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.582990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.582997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.583162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.583325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.583333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.583340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.583346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.595484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.595911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.595969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.595994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.596461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.596635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.596644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.596651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.596657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.608386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.608781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.608797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.608807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.608992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.609166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.609175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.609198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.609205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.832 [2024-11-20 14:51:36.621313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.832 [2024-11-20 14:51:36.621731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.832 [2024-11-20 14:51:36.621748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.832 [2024-11-20 14:51:36.621755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.832 [2024-11-20 14:51:36.621929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.832 [2024-11-20 14:51:36.622111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.832 [2024-11-20 14:51:36.622121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.832 [2024-11-20 14:51:36.622128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.832 [2024-11-20 14:51:36.622135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.634169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.634574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.634618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.634643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.635243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.635465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.635473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.635481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.635487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.647092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.647419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.647435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.647442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.647606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.647771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.647781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.647788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.647794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.659990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.660385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.660401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.660408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.660581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.660755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.660764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.660771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.660777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.672839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.673246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.673263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.673270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.673443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.673617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.673626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.673633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.673639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.685807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.686197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.686214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.686222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.686400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.686585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.686594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.686601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.686610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.698736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.699160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.699178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.699186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.699360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.699534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.699542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.699549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.699555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.711649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.712064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.712081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.712089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.712254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.712420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.712428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.712435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.712440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.724662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.725022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.725038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.725046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.725219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.725394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.725403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.725409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.725416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.737576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:32:24.833 [2024-11-20 14:51:36.738067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:24.833 [2024-11-20 14:51:36.738084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420
00:32:24.833 [2024-11-20 14:51:36.738091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set
00:32:24.833 [2024-11-20 14:51:36.738270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor
00:32:24.833 [2024-11-20 14:51:36.738451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:32:24.833 [2024-11-20 14:51:36.738459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:32:24.833 [2024-11-20 14:51:36.738467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:32:24.833 [2024-11-20 14:51:36.738473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:32:24.833 [2024-11-20 14:51:36.750747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.833 [2024-11-20 14:51:36.751369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.833 [2024-11-20 14:51:36.751418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.833 [2024-11-20 14:51:36.751442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.833 [2024-11-20 14:51:36.751903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.833 [2024-11-20 14:51:36.752089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.833 [2024-11-20 14:51:36.752099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.833 [2024-11-20 14:51:36.752106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.834 [2024-11-20 14:51:36.752114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.834 [2024-11-20 14:51:36.763807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.834 [2024-11-20 14:51:36.764235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.834 [2024-11-20 14:51:36.764252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.834 [2024-11-20 14:51:36.764260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.834 [2024-11-20 14:51:36.764434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.834 [2024-11-20 14:51:36.764607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.834 [2024-11-20 14:51:36.764616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.834 [2024-11-20 14:51:36.764623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.834 [2024-11-20 14:51:36.764629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:24.834 [2024-11-20 14:51:36.776765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:24.834 [2024-11-20 14:51:36.777185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:24.834 [2024-11-20 14:51:36.777201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:24.834 [2024-11-20 14:51:36.777212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:24.834 [2024-11-20 14:51:36.777385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:24.834 [2024-11-20 14:51:36.777559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:24.834 [2024-11-20 14:51:36.777568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:24.834 [2024-11-20 14:51:36.777574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:24.834 [2024-11-20 14:51:36.777581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.094 [2024-11-20 14:51:36.789716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.094 [2024-11-20 14:51:36.790169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.094 [2024-11-20 14:51:36.790186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.094 [2024-11-20 14:51:36.790193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.094 [2024-11-20 14:51:36.790367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.094 [2024-11-20 14:51:36.790540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.094 [2024-11-20 14:51:36.790549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.094 [2024-11-20 14:51:36.790556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.094 [2024-11-20 14:51:36.790562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.094 [2024-11-20 14:51:36.802637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.094 [2024-11-20 14:51:36.803058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.094 [2024-11-20 14:51:36.803075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.094 [2024-11-20 14:51:36.803082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.094 [2024-11-20 14:51:36.803247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.094 [2024-11-20 14:51:36.803411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.094 [2024-11-20 14:51:36.803419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.094 [2024-11-20 14:51:36.803427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.094 [2024-11-20 14:51:36.803433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.094 [2024-11-20 14:51:36.815547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.094 [2024-11-20 14:51:36.816000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.094 [2024-11-20 14:51:36.816017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.094 [2024-11-20 14:51:36.816025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.094 [2024-11-20 14:51:36.816199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.094 [2024-11-20 14:51:36.816372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.094 [2024-11-20 14:51:36.816384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.094 [2024-11-20 14:51:36.816391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.094 [2024-11-20 14:51:36.816397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.094 [2024-11-20 14:51:36.828374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.094 [2024-11-20 14:51:36.828815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.094 [2024-11-20 14:51:36.828831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.094 [2024-11-20 14:51:36.828839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.829033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.829213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.829222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.829229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.829236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.841318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.841781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.841836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.841860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.842473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.842865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.842883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.842898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.842911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.856117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.856659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.856706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.856731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.857221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.857477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.857490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.857500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.857513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.869032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.869384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.869401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.869409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.869577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.869747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.869755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.869762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.869768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.881860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.882281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.882298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.882305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.882479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.882653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.882661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.882668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.882675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.894733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1749274 Killed "${NVMF_APP[@]}" "$@" 00:32:25.095 [2024-11-20 14:51:36.895192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.895209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.895217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.895395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:25.095 [2024-11-20 14:51:36.895575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.895585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.895592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.895598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1750657 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1750657 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1750657 ']' 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.095 14:51:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.095 [2024-11-20 14:51:36.907832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.908253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.908269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.908276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.908455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.908634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.908643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.908650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.908657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.920928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.921365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.921382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.921390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.921569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.095 [2024-11-20 14:51:36.921748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.095 [2024-11-20 14:51:36.921756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.095 [2024-11-20 14:51:36.921763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.095 [2024-11-20 14:51:36.921770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.095 [2024-11-20 14:51:36.934116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.095 [2024-11-20 14:51:36.934534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.095 [2024-11-20 14:51:36.934552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.095 [2024-11-20 14:51:36.934560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.095 [2024-11-20 14:51:36.934738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:36.934918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:36.934926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:36.934933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:36.934940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:36.947128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:36.947552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:36.947570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:36.947578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:36.947758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:36.947938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:36.947953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:36.947961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:36.947967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.096 [2024-11-20 14:51:36.952100] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:32:25.096 [2024-11-20 14:51:36.952140] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.096 [2024-11-20 14:51:36.960226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:36.960637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:36.960654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:36.960662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:36.960841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:36.961122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:36.961133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:36.961141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:36.961148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:36.973329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:36.973770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:36.973787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:36.973795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:36.973979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:36.974163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:36.974174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:36.974182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:36.974189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:36.986441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:36.986883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:36.986901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:36.986909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:36.987094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:36.987273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:36.987282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:36.987289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:36.987296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:36.999570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:36.999937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:36.999960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:36.999968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:37.000148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:37.000327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:37.000336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:37.000344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:37.000351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:37.012793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:37.013236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:37.013256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:37.013264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:37.013443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:37.013622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:37.013631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:37.013638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:37.013645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:37.025931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:37.026376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:37.026393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:37.026401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:37.026580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:37.026759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:37.026768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:37.026775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:37.026782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.096 [2024-11-20 14:51:37.032779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:25.096 [2024-11-20 14:51:37.039087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.096 [2024-11-20 14:51:37.039509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.096 [2024-11-20 14:51:37.039527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.096 [2024-11-20 14:51:37.039535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.096 [2024-11-20 14:51:37.039715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.096 [2024-11-20 14:51:37.039894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.096 [2024-11-20 14:51:37.039903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.096 [2024-11-20 14:51:37.039911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.096 [2024-11-20 14:51:37.039918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.052238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.052700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.052717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.052725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.052909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.053094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.053104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.053111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.053118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.065353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.065797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.065813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.065821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.066008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.066186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.066195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.066202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.066210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.361 [2024-11-20 14:51:37.076637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.361 [2024-11-20 14:51:37.076662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.361 [2024-11-20 14:51:37.076669] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.361 [2024-11-20 14:51:37.076675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:25.361 [2024-11-20 14:51:37.076680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.361 [2024-11-20 14:51:37.078008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.361 [2024-11-20 14:51:37.078114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.361 [2024-11-20 14:51:37.078114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:25.361 [2024-11-20 14:51:37.078587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.078938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.078961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.078970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.079150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.079330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.079339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.079346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.079358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.091825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.092290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.092310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.092319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.092499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.092679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.092688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.092695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.092702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.104999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.105463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.105483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.105492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.105672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.105851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.105860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.105868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.105875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.118159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.118591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.118609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.118617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.118797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.118982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.118992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.119000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.119007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.131294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.131759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.131784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.131793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.131976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.132157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.132166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.361 [2024-11-20 14:51:37.132173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.361 [2024-11-20 14:51:37.132180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.361 [2024-11-20 14:51:37.144520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.361 [2024-11-20 14:51:37.144902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.361 [2024-11-20 14:51:37.144920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.361 [2024-11-20 14:51:37.144928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.361 [2024-11-20 14:51:37.145116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.361 [2024-11-20 14:51:37.145297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.361 [2024-11-20 14:51:37.145306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.145313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.145320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 [2024-11-20 14:51:37.157768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.158208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.158226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.158233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.158412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.158592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.158601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.158608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.158614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.362 [2024-11-20 14:51:37.170886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.362 [2024-11-20 14:51:37.171307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.171324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.171332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.171509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.171688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.171697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.171704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.171711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 [2024-11-20 14:51:37.183993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.184404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.184421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.184429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.184608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.184786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.184795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.184802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.184809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 [2024-11-20 14:51:37.197096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.197376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.197393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.197401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.197580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.197759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.197768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.197775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.197781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.362 [2024-11-20 14:51:37.210234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.210529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.210545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.210554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.210732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.210912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.210921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.210928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.210935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 [2024-11-20 14:51:37.214862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.362 [2024-11-20 14:51:37.223392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.223804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.223821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.223828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.224012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.224193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.224201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.224209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.224216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 [2024-11-20 14:51:37.236519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.236957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.236975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.236983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.237162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.237341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.237350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.237361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.237368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 [2024-11-20 14:51:37.249650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.250089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.250108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.250117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.250297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.250477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.250486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.250494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.250500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 Malloc0 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.362 [2024-11-20 14:51:37.262788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.263227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.263244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.263252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.263432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.263611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.263620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.263627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.263634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:25.362 [2024-11-20 14:51:37.275911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.362 [2024-11-20 14:51:37.276330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:25.362 [2024-11-20 14:51:37.276349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e15500 with addr=10.0.0.2, port=4420 00:32:25.362 [2024-11-20 14:51:37.276357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e15500 is same with the state(6) to be set 00:32:25.362 [2024-11-20 14:51:37.276536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15500 (9): Bad file descriptor 00:32:25.362 [2024-11-20 14:51:37.276716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:32:25.362 [2024-11-20 14:51:37.276726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:32:25.362 [2024-11-20 14:51:37.276733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:32:25.362 [2024-11-20 14:51:37.276739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:32:25.362 [2024-11-20 14:51:37.277889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.362 14:51:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1749745 [2024-11-20 14:51:37.289020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:32:25.620 4708.33 IOPS, 18.39 MiB/s [2024-11-20T13:51:37.578Z] [2024-11-20 14:51:37.364845] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:32:27.492 5593.14 IOPS, 21.85 MiB/s [2024-11-20T13:51:40.383Z] 6282.38 IOPS, 24.54 MiB/s [2024-11-20T13:51:41.761Z] 6800.44 IOPS, 26.56 MiB/s [2024-11-20T13:51:42.697Z] 7243.30 IOPS, 28.29 MiB/s [2024-11-20T13:51:43.633Z] 7604.18 IOPS, 29.70 MiB/s [2024-11-20T13:51:44.570Z] 7896.50 IOPS, 30.85 MiB/s [2024-11-20T13:51:45.506Z] 8149.00 IOPS, 31.83 MiB/s [2024-11-20T13:51:46.559Z] 8358.36 IOPS, 32.65 MiB/s [2024-11-20T13:51:46.559Z] 8537.87 IOPS, 33.35 MiB/s 00:32:34.601 Latency(us) 00:32:34.601 [2024-11-20T13:51:46.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.601 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:34.601 Verification LBA range: start 0x0 length 0x4000 00:32:34.601 Nvme1n1 : 15.01 8537.52 33.35 10914.64 0.00 6559.83 454.12 15728.64 00:32:34.601 [2024-11-20T13:51:46.559Z] =================================================================================================================== 00:32:34.601 [2024-11-20T13:51:46.559Z] Total : 8537.52 33.35 10914.64 0.00 6559.83 454.12 15728.64 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.601 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.601 rmmod nvme_tcp 00:32:34.881 rmmod nvme_fabrics 00:32:34.881 rmmod nvme_keyring 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1750657 ']' 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1750657 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1750657 ']' 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1750657 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750657 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750657' 00:32:34.881 killing process with pid 1750657 00:32:34.881 
14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1750657 00:32:34.881 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1750657 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.141 14:51:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:37.046 00:32:37.046 real 0m26.840s 00:32:37.046 user 1m3.266s 00:32:37.046 sys 0m6.777s 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.046 ************************************ 00:32:37.046 END TEST nvmf_bdevperf 00:32:37.046 
************************************ 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.046 ************************************ 00:32:37.046 START TEST nvmf_target_disconnect 00:32:37.046 ************************************ 00:32:37.046 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:37.306 * Looking for test storage... 00:32:37.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.306 --rc genhtml_branch_coverage=1 00:32:37.306 --rc genhtml_function_coverage=1 00:32:37.306 --rc genhtml_legend=1 00:32:37.306 --rc geninfo_all_blocks=1 00:32:37.306 --rc geninfo_unexecuted_blocks=1 
00:32:37.306 00:32:37.306 ' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.306 --rc genhtml_branch_coverage=1 00:32:37.306 --rc genhtml_function_coverage=1 00:32:37.306 --rc genhtml_legend=1 00:32:37.306 --rc geninfo_all_blocks=1 00:32:37.306 --rc geninfo_unexecuted_blocks=1 00:32:37.306 00:32:37.306 ' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.306 --rc genhtml_branch_coverage=1 00:32:37.306 --rc genhtml_function_coverage=1 00:32:37.306 --rc genhtml_legend=1 00:32:37.306 --rc geninfo_all_blocks=1 00:32:37.306 --rc geninfo_unexecuted_blocks=1 00:32:37.306 00:32:37.306 ' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:37.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.306 --rc genhtml_branch_coverage=1 00:32:37.306 --rc genhtml_function_coverage=1 00:32:37.306 --rc genhtml_legend=1 00:32:37.306 --rc geninfo_all_blocks=1 00:32:37.306 --rc geninfo_unexecuted_blocks=1 00:32:37.306 00:32:37.306 ' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.306 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.307 14:51:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:37.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:37.307 14:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.879 
14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:43.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:43.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.879 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:43.880 Found net devices under 0000:86:00.0: cvl_0_0 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:43.880 Found net devices under 0000:86:00.1: cvl_0_1 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.880 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.880 14:51:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:32:43.880 00:32:43.880 --- 10.0.0.2 ping statistics --- 00:32:43.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.880 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:32:43.880 00:32:43.880 --- 10.0.0.1 ping statistics --- 00:32:43.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.880 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.880 14:51:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:43.880 ************************************ 00:32:43.880 START TEST nvmf_target_disconnect_tc1 00:32:43.880 ************************************ 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:43.880 [2024-11-20 14:51:55.208415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.880 [2024-11-20 14:51:55.208459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd4ab0 with 
addr=10.0.0.2, port=4420 00:32:43.880 [2024-11-20 14:51:55.208492] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.880 [2024-11-20 14:51:55.208501] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.880 [2024-11-20 14:51:55.208507] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:43.880 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:43.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:43.880 Initializing NVMe Controllers 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:43.880 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:43.881 00:32:43.881 real 0m0.117s 00:32:43.881 user 0m0.046s 00:32:43.881 sys 0m0.070s 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 ************************************ 00:32:43.881 END TEST nvmf_target_disconnect_tc1 00:32:43.881 ************************************ 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:43.881 14:51:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 ************************************ 00:32:43.881 START TEST nvmf_target_disconnect_tc2 00:32:43.881 ************************************ 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1755768 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1755768 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1755768 ']' 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 [2024-11-20 14:51:55.318278] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:32:43.881 [2024-11-20 14:51:55.318322] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.881 [2024-11-20 14:51:55.398583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:43.881 [2024-11-20 14:51:55.440960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.881 [2024-11-20 14:51:55.441002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.881 [2024-11-20 14:51:55.441009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.881 [2024-11-20 14:51:55.441015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.881 [2024-11-20 14:51:55.441020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
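The records above show nvmf/common.sh's nvmf_tcp_init building the test network: the target-side NIC is moved into a network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms connectivity before the target starts. A minimal sketch of those steps, assuming root privileges and the same interface names (cvl_0_0 / cvl_0_1) seen in the log; the function name setup_nvmf_tcp_net is hypothetical, and the real helper additionally discovers PCI net devices and installs cleanup traps:

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based TCP test network set up in the records above.
# Assumes root and NICs named cvl_0_0 / cvl_0_1 (as in this log); the actual
# nvmf/common.sh helper also handles device discovery and cleanup.
setup_nvmf_tcp_net() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"          # target side lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays on the host
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # Admit NVMe/TCP traffic arriving on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify both directions before launching nvmf_tgt inside the namespace
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
# Invoke as root: setup_nvmf_tcp_net
```

With this layout, the target application is started under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the records above.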
00:32:43.881 [2024-11-20 14:51:55.442697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:43.881 [2024-11-20 14:51:55.442805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:43.881 [2024-11-20 14:51:55.443107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:43.881 [2024-11-20 14:51:55.443108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 Malloc0 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.881 14:51:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 [2024-11-20 14:51:55.621723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.881 14:51:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 [2024-11-20 14:51:55.653997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1755790 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:43.881 14:51:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:45.787 14:51:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1755768 00:32:45.787 14:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 
Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 [2024-11-20 14:51:57.682342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O 
failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 
00:32:45.787 [2024-11-20 14:51:57.682550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 
starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Read completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.787 Write completed with error (sct=0, sc=8) 00:32:45.787 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 [2024-11-20 14:51:57.682749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, 
sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Write completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 Read completed with error (sct=0, sc=8) 00:32:45.788 starting I/O failed 00:32:45.788 [2024-11-20 14:51:57.682943] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:45.788 [2024-11-20 14:51:57.683147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.788 [2024-11-20 14:51:57.683165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:45.788 qpair failed and we were unable to recover it.
00:32:45.789 (last three messages repeated for subsequent reconnect attempts of tqpair=0x59eba0, [2024-11-20 14:51:57.683343] through [2024-11-20 14:51:57.694138])
00:32:45.789 [2024-11-20 14:51:57.694367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.789 [2024-11-20 14:51:57.694430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.789 qpair failed and we were unable to recover it.
00:32:45.791 (last three messages repeated for subsequent reconnect attempts of tqpair=0x7fac24000b90, [2024-11-20 14:51:57.694746] through [2024-11-20 14:51:57.706376])
00:32:45.791 [2024-11-20 14:51:57.706605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.706635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.706894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.706925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.707084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.707116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.707234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.707266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.707516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.707547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 
00:32:45.791 [2024-11-20 14:51:57.707744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.707775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.708040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.708072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.708189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.708221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.708468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.708499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.708747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.708779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 
00:32:45.791 [2024-11-20 14:51:57.708957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.708990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.709195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.709228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.709416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.709447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.709653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.709685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.709923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.709960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 
00:32:45.791 [2024-11-20 14:51:57.710198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.710230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.710419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.710450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.710688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.710720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.791 qpair failed and we were unable to recover it. 00:32:45.791 [2024-11-20 14:51:57.710899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.791 [2024-11-20 14:51:57.710930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.711150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.711182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.711362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.711395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.711514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.711544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.711745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.711776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.712043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.712076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.712314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.712350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.712666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.712698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.712967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.712999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.713196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.713227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.713396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.713427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.713686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.713716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.713920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.713956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.714147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.714178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.714311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.714342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.714549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.714580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.714768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.714799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.715013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.715045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.715311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.715342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.715589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.715622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.715877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.715909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.716234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.716267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.716449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.716481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.716686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.716718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.716915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.716958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.717083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.717116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.717307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.717338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.717519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.717551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.717728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.717760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.717960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.717993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.718171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.718202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.718439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.718472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.718780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.718811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.719024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.719059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.719262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.719294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.719574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.719605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.719872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.719904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 
00:32:45.792 [2024-11-20 14:51:57.720090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.792 [2024-11-20 14:51:57.720122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.792 qpair failed and we were unable to recover it. 00:32:45.792 [2024-11-20 14:51:57.720332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.720362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.720615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.720645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.720904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.720936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.721081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.721114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 
00:32:45.793 [2024-11-20 14:51:57.721243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.721275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.721460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.721491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.721704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.721736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.721913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.721945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.722156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.722199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 
00:32:45.793 [2024-11-20 14:51:57.722392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.722423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.722617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.722648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.722892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.722925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.723163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.723195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.723387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.723419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 
00:32:45.793 [2024-11-20 14:51:57.723682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.723713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.723976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.724009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.724245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.724277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.724472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.724502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 00:32:45.793 [2024-11-20 14:51:57.724743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.793 [2024-11-20 14:51:57.724776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:45.793 qpair failed and we were unable to recover it. 
00:32:45.793 [2024-11-20 14:51:57.725007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.725040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.725299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.725329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.725539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.725570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.725807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.725839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.725991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.726023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.726280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.726312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.726534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.726565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.726756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.726787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.727065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.727097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.727284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.727316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.727593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.727623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.727757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.727790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.728032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.728065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.728282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.728312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.728446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.728477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.728678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.728709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.728911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.728943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.729214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.729247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.729439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.793 [2024-11-20 14:51:57.729470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.793 qpair failed and we were unable to recover it.
00:32:45.793 [2024-11-20 14:51:57.729609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.729642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.729910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.729942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.730139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.730171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.730300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.730330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.730497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.730528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.730805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.730837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.731023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.731056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.731226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.731257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.731468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.731500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.731758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.731788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.731917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.731961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.732171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.732203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.732472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.732502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.732615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.732647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.732926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.732964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.733159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.733192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.733394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.733424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.733671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.733704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.733897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.733928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.734119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.734150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.734278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.734309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.734436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.734467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.734743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.734775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.735018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.735051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.735275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.735307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.735514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.735545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.735739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.735769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.736043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.736074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.736208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.736239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.736352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.736383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.736604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.736635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.736931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.736970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.737184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.737215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.737436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.737469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.737747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.737778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.737957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.737990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.738130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.738161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.794 [2024-11-20 14:51:57.738411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.794 [2024-11-20 14:51:57.738443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.794 qpair failed and we were unable to recover it.
00:32:45.795 [2024-11-20 14:51:57.738729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.795 [2024-11-20 14:51:57.738759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.795 qpair failed and we were unable to recover it.
00:32:45.795 [2024-11-20 14:51:57.738975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.795 [2024-11-20 14:51:57.739008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.795 qpair failed and we were unable to recover it.
00:32:45.795 [2024-11-20 14:51:57.739203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:45.795 [2024-11-20 14:51:57.739233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:45.795 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.739438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.739470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.739715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.739747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.739959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.739992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.740235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.740265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.740471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.740503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.740693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.740724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.740856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.740887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.741176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.741209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.741341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.741372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.741508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.741545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.741798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.741828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.742095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.742130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.742342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.742373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.742595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.742626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.742803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.742836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.742988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.743022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.743208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.743240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.743452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.743483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.743609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.743640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.743882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.743913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.744165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.744197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.744420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.744451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.744708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.070 [2024-11-20 14:51:57.744739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.070 qpair failed and we were unable to recover it.
00:32:46.070 [2024-11-20 14:51:57.745006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.745039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.745233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.745265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.745451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.745482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.745679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.745711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.745924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.745967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.746161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.746194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.746402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.746432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.746608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.746640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.746878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.746910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.747160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.747192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.747402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.747432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.747554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.747585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.747775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.747807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.748070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.748104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.748290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.748321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.748529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.748560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.748828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.748859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.749076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.749109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.749386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.749417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.749682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.749715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.749925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.749974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.750230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.750262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.750435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.750465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.750739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.750776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.751014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.751046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.751256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.751288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.751575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.751612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.751871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.751902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.752132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.752164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.752407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.752438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.752639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.752670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.752880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.071 [2024-11-20 14:51:57.752909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.071 qpair failed and we were unable to recover it.
00:32:46.071 [2024-11-20 14:51:57.753169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.753201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.753335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.753365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.753660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.753690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.753935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.753980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.754257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.754288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 
00:32:46.072 [2024-11-20 14:51:57.754477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.754509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.754716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.754747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.754928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.754970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.755204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.755235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.755525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.755555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 
00:32:46.072 [2024-11-20 14:51:57.755797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.755829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.756088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.756120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.756339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.756371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.756618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.756650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.756857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.756888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 
00:32:46.072 [2024-11-20 14:51:57.757087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.757119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.757259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.757289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.757576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.757607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.757845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.757876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.758066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.758099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 
00:32:46.072 [2024-11-20 14:51:57.758237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.758268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.758516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.758548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.758811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.758842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.759138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.759169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.759384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.759416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 
00:32:46.072 [2024-11-20 14:51:57.759621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.759651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.759837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.759867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.760090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.760123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.760330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.760362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.760547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.760578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 
00:32:46.072 [2024-11-20 14:51:57.760769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.760799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.761043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.761076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.761266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-11-20 14:51:57.761297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.072 qpair failed and we were unable to recover it. 00:32:46.072 [2024-11-20 14:51:57.761490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.761521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.761705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.761743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 
00:32:46.073 [2024-11-20 14:51:57.761872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.761902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.762103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.762135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.762358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.762389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.762650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.762680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.762932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.762973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 
00:32:46.073 [2024-11-20 14:51:57.763265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.763295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.763509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.763539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.763721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.763751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.763933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.763976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.764162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.764194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 
00:32:46.073 [2024-11-20 14:51:57.764407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.764438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.764636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.764666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.764793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.764825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.765016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.765049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.765234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.765265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 
00:32:46.073 [2024-11-20 14:51:57.765465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.765496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.765631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.765662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.765863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.765894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.766048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.766081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.766266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.766297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 
00:32:46.073 [2024-11-20 14:51:57.766502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.766533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.766812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.766843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.767050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.767083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.767382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.767414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.767639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.767670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 
00:32:46.073 [2024-11-20 14:51:57.767866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.767898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.768182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.768215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.073 qpair failed and we were unable to recover it. 00:32:46.073 [2024-11-20 14:51:57.768491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-11-20 14:51:57.768523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.768725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.768755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.768985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.769017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 
00:32:46.074 [2024-11-20 14:51:57.769213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.769243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.769456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.769488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.769682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.769713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.769896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.769926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.770203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.770235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 
00:32:46.074 [2024-11-20 14:51:57.770475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.770506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.770724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.770755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.770934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.770994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.771265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.771297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.771585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.771626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 
00:32:46.074 [2024-11-20 14:51:57.771890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.771920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.772128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.772160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.772336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.772368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.772633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.772664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 00:32:46.074 [2024-11-20 14:51:57.772904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-11-20 14:51:57.772935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.074 qpair failed and we were unable to recover it. 
00:32:46.076 [2024-11-20 14:51:57.788168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.076 [2024-11-20 14:51:57.788200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:46.076 qpair failed and we were unable to recover it.
00:32:46.076 [2024-11-20 14:51:57.788455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.076 [2024-11-20 14:51:57.788531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.076 qpair failed and we were unable to recover it.
00:32:46.076 [2024-11-20 14:51:57.788768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.076 [2024-11-20 14:51:57.788805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.076 qpair failed and we were unable to recover it.
00:32:46.076 [2024-11-20 14:51:57.789110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.076 [2024-11-20 14:51:57.789147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.076 qpair failed and we were unable to recover it.
00:32:46.076 [2024-11-20 14:51:57.789343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.076 [2024-11-20 14:51:57.789375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.076 qpair failed and we were unable to recover it.
00:32:46.078 [2024-11-20 14:51:57.800806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.800837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.801029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.801062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.801337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.801369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.801565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.801597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.801843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.801873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 
00:32:46.078 [2024-11-20 14:51:57.802081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.802114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.802332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.802365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.802492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.802524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.802656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.802686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.802881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.802914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 
00:32:46.078 [2024-11-20 14:51:57.803145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.803178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.803307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.803339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.803543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.803575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.803710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.803742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.803964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.803999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 
00:32:46.078 [2024-11-20 14:51:57.804131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.804162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.804435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.804467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.804781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.804813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.805069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.805102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.805261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.805293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 
00:32:46.078 [2024-11-20 14:51:57.805445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.805476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.805729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.805762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.806014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.806048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.806192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.078 [2024-11-20 14:51:57.806223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.078 qpair failed and we were unable to recover it. 00:32:46.078 [2024-11-20 14:51:57.806424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.806456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.806684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.806717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.806895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.806928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.807113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.807147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.807286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.807318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.807568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.807600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.807806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.807839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.808034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.808067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.808263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.808295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.808501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.808532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.808779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.808811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.808963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.808996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.809265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.809297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.809546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.809579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.809798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.809831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.810070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.810102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.810374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.810406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.810725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.810757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.810969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.811001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.811200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.811232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.811433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.811465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.811768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.811799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.812075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.812113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.812394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.812426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.812612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.812644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.812851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.812885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.813082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.813118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.813252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.813283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.813536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.813569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.813870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.813902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 00:32:46.079 [2024-11-20 14:51:57.814094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.079 [2024-11-20 14:51:57.814127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.079 qpair failed and we were unable to recover it. 
00:32:46.079 [2024-11-20 14:51:57.814408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.814440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.814667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.814699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.814895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.814928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.815258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.815293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.815483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.815516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 
00:32:46.080 [2024-11-20 14:51:57.815801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.815834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.816127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.816161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.816311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.816343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.816565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.816596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.816863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.816896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 
00:32:46.080 [2024-11-20 14:51:57.817047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.817081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.817270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.817302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.817445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.817478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.817755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.817788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.817985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.818020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 
00:32:46.080 [2024-11-20 14:51:57.818158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.818191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.818441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.818474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.818818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.818850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.819111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.819146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 00:32:46.080 [2024-11-20 14:51:57.819292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.819325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it. 
00:32:46.080 [2024-11-20 14:51:57.819476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.080 [2024-11-20 14:51:57.819508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.080 qpair failed and we were unable to recover it.
[... identical posix_sock_create/nvme_tcp_qpair_connect_sock error triplet repeats with advancing timestamps through 14:51:57.847, all for tqpair=0x59eba0, addr=10.0.0.2, port=4420 ...]
00:32:46.084 [2024-11-20 14:51:57.847591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.847622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.847874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.847906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.848105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.848138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.848342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.848374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.848525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.848557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 
00:32:46.084 [2024-11-20 14:51:57.848852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.848885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.849113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.849206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.849464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.849501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.849778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.849811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.850039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.850075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 
00:32:46.084 [2024-11-20 14:51:57.850282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.850313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.850447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.850479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.850603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.850636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.850914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.850946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.851085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.851116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 
00:32:46.084 [2024-11-20 14:51:57.851304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.851335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.851484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.851516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.084 [2024-11-20 14:51:57.851770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.084 [2024-11-20 14:51:57.851802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.084 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.852072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.852104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.852357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.852406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 
00:32:46.085 [2024-11-20 14:51:57.852649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.852679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.852938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.852979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.853184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.853216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.853424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.853456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.853704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.853736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 
00:32:46.085 [2024-11-20 14:51:57.853870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.853902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.854195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.854228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.854438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.854470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.854615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.854648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.854864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.854897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 
00:32:46.085 [2024-11-20 14:51:57.855114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.855147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.855361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.855394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.855635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.855666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.855878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.855911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.856067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.856100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 
00:32:46.085 [2024-11-20 14:51:57.856253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.856285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.856579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.856612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.856892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.856925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.857089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.857123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.857402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.857434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 
00:32:46.085 [2024-11-20 14:51:57.857652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.857684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.857809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.857842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.858132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.858166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.858318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.858350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.858606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.858637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 
00:32:46.085 [2024-11-20 14:51:57.858892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.858925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.859162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.085 [2024-11-20 14:51:57.859195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.085 qpair failed and we were unable to recover it. 00:32:46.085 [2024-11-20 14:51:57.859420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.859452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.859821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.859852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.860069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.860103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-11-20 14:51:57.860309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.860341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.860547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.860580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.860823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.860856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.861075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.861109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.861379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.861412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-11-20 14:51:57.861708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.861740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.861969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.862001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.862213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.862245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.862520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.862553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.862782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.862819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-11-20 14:51:57.862973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.863006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.863212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.863245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.863506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.863537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.863735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.863767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.864050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.864083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-11-20 14:51:57.864336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.864369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.864521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.864553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.864681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.864713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.864987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.865021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.865229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.865261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-11-20 14:51:57.865464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.865496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.865770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.865803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.866088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.866121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.866325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.866358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.866553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.866584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 
00:32:46.086 [2024-11-20 14:51:57.866777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.086 [2024-11-20 14:51:57.866808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.086 qpair failed and we were unable to recover it. 00:32:46.086 [2024-11-20 14:51:57.867041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-11-20 14:51:57.867075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-11-20 14:51:57.867356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-11-20 14:51:57.867389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-11-20 14:51:57.867580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-11-20 14:51:57.867612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 00:32:46.087 [2024-11-20 14:51:57.867815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.087 [2024-11-20 14:51:57.867847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.087 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-11-20 14:51:57.897272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.897305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.897558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.897590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.897845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.897877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.898098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.898133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.898339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.898371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-11-20 14:51:57.898731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.898764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.899021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.899055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.899359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.899391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.899602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.899634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.899834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.899866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 
00:32:46.090 [2024-11-20 14:51:57.900079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.900111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.090 [2024-11-20 14:51:57.900370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.090 [2024-11-20 14:51:57.900403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.090 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.900560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.900593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.900814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.900846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.901031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.901064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-11-20 14:51:57.901325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.901358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.901561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.901593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.901728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.901760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.901968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.902001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.902207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.902239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-11-20 14:51:57.902435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.902467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.902762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.902794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.902923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.902981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.903130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.903162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.903477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.903509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-11-20 14:51:57.903656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.903688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.903888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.903920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.904099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.904132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.904318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.904350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.904532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.904564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-11-20 14:51:57.904862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.904894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.905098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.905132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.905333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.905365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.905630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.905662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.905913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.905946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-11-20 14:51:57.906169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.906201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.906503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.906535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.906820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.906852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.906994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.907027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.907227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.907260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 
00:32:46.091 [2024-11-20 14:51:57.907450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.907481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.091 [2024-11-20 14:51:57.907734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.091 [2024-11-20 14:51:57.907766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.091 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.907890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.907922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.908140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.908174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.908449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.908481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-11-20 14:51:57.908716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.908748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.908892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.908924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.909154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.909186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.909400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.909432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.909688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.909721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-11-20 14:51:57.909959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.909992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.910247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.910279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.910427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.910459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.910718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.910750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.910938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.910981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-11-20 14:51:57.911196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.911228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.911436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.911473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.911829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.911861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.912107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.912141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.912426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.912458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-11-20 14:51:57.912724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.912755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.913026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.913059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.913312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.913344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.913503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.913535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.913731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.913763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 
00:32:46.092 [2024-11-20 14:51:57.914079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.914112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.092 [2024-11-20 14:51:57.914390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.092 [2024-11-20 14:51:57.914421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.092 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.914726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.914759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.915027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.915060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.915267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.915299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 
00:32:46.093 [2024-11-20 14:51:57.915561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.915594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.915859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.915891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.916118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.916151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.916330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.916362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 00:32:46.093 [2024-11-20 14:51:57.916627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.093 [2024-11-20 14:51:57.916660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.093 qpair failed and we were unable to recover it. 
00:32:46.093 [2024-11-20 14:51:57.916938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.916983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.917141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.917172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.917379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.917411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.917625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.917657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.917913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.917945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.918250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.918283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.918593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.918625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.918907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.918938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.919170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.919203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.919423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.919456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.919648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.919680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.919898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.919930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.920084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.920118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.920394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.920426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.920738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.920770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.921056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.921090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.921368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.921400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.921529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.921848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.921879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.922169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.922202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.922406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.922438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.922718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.922755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.922907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.922938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.923228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.923259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.923480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.923511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.923787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.923820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.924017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.924051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.924257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.924288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.093 qpair failed and we were unable to recover it.
00:32:46.093 [2024-11-20 14:51:57.924507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.093 [2024-11-20 14:51:57.924538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.925103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.925145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.925451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.925483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.925771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.925803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.926078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.926112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.926309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.926342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.926497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.926528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.926813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.926845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.927079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.927113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.927261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.927293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.927600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.927632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.927886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.927919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.928215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.928305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.928649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.928687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.928917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.928964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.929153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.929185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.929391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.929421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.929624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.929655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.929927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.929970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.930276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.930307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.930513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.930544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.930795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.930827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.931012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.931043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.931267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.931298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.931506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.931536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.931722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.931752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.931970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.932003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.932149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.932179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.932339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.932370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.932673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.932706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.932925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.932967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.933247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.094 [2024-11-20 14:51:57.933277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.094 qpair failed and we were unable to recover it.
00:32:46.094 [2024-11-20 14:51:57.933426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.933458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.933802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.933840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.934072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.934105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.934362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.934394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.934661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.934692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.934974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.935008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.935159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.935190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.935444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.935475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.935811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.935843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.936111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.936145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.936300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.936331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.936601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.936635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.936828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.936858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.937095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.937129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.937336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.937367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.937636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.937667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.937955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.937988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.938224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.938258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.938454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.938485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.938641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.938673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.938899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.938930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.939208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.939242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.939437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.939469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.939750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.939782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.940045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.940080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.940230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.940263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.940474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.940507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.940807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.940840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.941084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.941116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.941267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.941299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.941598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.941629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.941824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.941857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.942070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.942102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.942326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.095 [2024-11-20 14:51:57.942358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.095 qpair failed and we were unable to recover it.
00:32:46.095 [2024-11-20 14:51:57.942563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.942594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.942861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.942891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.943115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.943149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.943286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.943319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.943667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.943699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.943903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.943935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.944262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.944294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.944528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.944567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.944775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.944806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.945085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.945118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.945248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.945279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.945512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.945544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.945753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.945785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.945992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.946025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.946217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.946247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.946507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.946538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.946851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.946882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.947166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.947198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.947404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.947435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.947648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.947680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.947904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.096 [2024-11-20 14:51:57.947936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.096 qpair failed and we were unable to recover it.
00:32:46.096 [2024-11-20 14:51:57.948226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.948260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.948537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.948568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.948873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.948906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.949227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.949259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.949469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.949501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 
00:32:46.096 [2024-11-20 14:51:57.949775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.949807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.950019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.950053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.950330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.950363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.950557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.950587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 00:32:46.096 [2024-11-20 14:51:57.950786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.096 [2024-11-20 14:51:57.950820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.096 qpair failed and we were unable to recover it. 
00:32:46.096 [2024-11-20 14:51:57.951055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.951089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.951369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.951402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.951606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.951636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.951879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.951910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.952094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.952128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-11-20 14:51:57.952356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.952389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.952671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.952704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.952987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.953021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.953213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.953247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.953438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.953472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-11-20 14:51:57.953818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.953852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.954127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.954163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.954366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.954401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.954678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.954712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.954989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.955022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-11-20 14:51:57.955252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.955283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.955545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.955584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.955715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.955747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.956019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.956053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.956373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.956404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-11-20 14:51:57.956554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.956587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.956872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.956903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.957197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.957229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.957508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.957540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.957801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.957833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 
00:32:46.097 [2024-11-20 14:51:57.958054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.958087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.958342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.958373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.958513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.958544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.097 [2024-11-20 14:51:57.958753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.097 [2024-11-20 14:51:57.958787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.097 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.959038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.959070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-11-20 14:51:57.959269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.959300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.959505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.959536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.959677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.959709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.959911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.959942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.960153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.960185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-11-20 14:51:57.960322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.960353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.960566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.960598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.960741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.960772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.961031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.961064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.961273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.961303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-11-20 14:51:57.961596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.961628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.961881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.961912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.962125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.962159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.962420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.962451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.962623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.962655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-11-20 14:51:57.962973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.963007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.963267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.963299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.963503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.963535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.963816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.963849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.964031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.964065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-11-20 14:51:57.964225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.964257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.964483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.964516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.964805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.964838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.965087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.965122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.965370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.965403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 
00:32:46.098 [2024-11-20 14:51:57.965599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.965630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.965749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.965786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.965990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.966023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.098 qpair failed and we were unable to recover it. 00:32:46.098 [2024-11-20 14:51:57.966242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.098 [2024-11-20 14:51:57.966274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.966501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.966532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.966818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.966851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.967109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.967143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.967289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.967320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.967560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.967592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.967874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.967909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.968110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.968142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.968465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.968497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.968748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.968781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.969001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.969036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.969297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.969329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.969590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.969622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.969875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.969907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.970134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.970167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.970370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.970402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.970687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.970718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.970938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.970981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.971262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.971294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.971542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.971574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.971840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.971872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.972103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.972136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.972264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.972295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.972507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.972540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.972778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.972810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.973025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.973059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.973289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.973323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.973484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.973515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.973803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.973836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.974116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.974149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.974281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.974313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.974572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.974604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 
00:32:46.099 [2024-11-20 14:51:57.974799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.974832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.975032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.975065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.099 [2024-11-20 14:51:57.975291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.099 [2024-11-20 14:51:57.975323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.099 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.975511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.975545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.975813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.975846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.976045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.976078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.976334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.976373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.976585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.976617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.976748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.976780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.977025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.977058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.977275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.977307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.977563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.977595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.977955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.977989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.978201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.978234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.978423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.978457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.978611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.978642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.978849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.978881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.979111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.979146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.979430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.979463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.979711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.979743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.979938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.979982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.980178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.980213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.980431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.980464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.980653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.980685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.981005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.981039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.981261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.981296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.981497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.981532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.981741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.981772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.981985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.982018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.982293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.982327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.982620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.982653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.982878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.982910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.983132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.983166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.983388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.983422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 00:32:46.100 [2024-11-20 14:51:57.983684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.100 [2024-11-20 14:51:57.983719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.100 qpair failed and we were unable to recover it. 
00:32:46.100 [2024-11-20 14:51:57.983993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.984025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.984231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.984263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.984524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.984556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.984833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.984865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.985139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.985174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 
00:32:46.101 [2024-11-20 14:51:57.985400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.985432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.985636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.985668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.985878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.985911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.986119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.986153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.986281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.986314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 
00:32:46.101 [2024-11-20 14:51:57.986462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.986493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.986709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.986751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.987046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.987079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.987290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.987323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.987618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.987652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 
00:32:46.101 [2024-11-20 14:51:57.987920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.987964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.988164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.988196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.988386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.988420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.988651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.988685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.988942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.988987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 
00:32:46.101 [2024-11-20 14:51:57.989132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.989165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.989347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.989380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.989569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.989601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.989879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.989913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.990109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.990143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 
00:32:46.101 [2024-11-20 14:51:57.990453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.990486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.990696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.990727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.990945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.990990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.991193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.991224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 00:32:46.101 [2024-11-20 14:51:57.991431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.991463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.101 qpair failed and we were unable to recover it. 
00:32:46.101 [2024-11-20 14:51:57.991739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.101 [2024-11-20 14:51:57.991772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.102 qpair failed and we were unable to recover it. 00:32:46.102 [2024-11-20 14:51:57.992023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.102 [2024-11-20 14:51:57.992059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.102 qpair failed and we were unable to recover it. 00:32:46.102 [2024-11-20 14:51:57.992263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.102 [2024-11-20 14:51:57.992295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.102 qpair failed and we were unable to recover it. 00:32:46.102 [2024-11-20 14:51:57.992516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.102 [2024-11-20 14:51:57.992551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.102 qpair failed and we were unable to recover it. 00:32:46.102 [2024-11-20 14:51:57.992750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.102 [2024-11-20 14:51:57.992782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.102 qpair failed and we were unable to recover it. 
00:32:46.102 [2024-11-20 14:51:57.992938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.102 [2024-11-20 14:51:57.993002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.102 qpair failed and we were unable to recover it.
[... identical connect() failed / qpair-failed error pairs repeat continuously from 14:51:57.993 through 14:51:58.019 (errno = 111, ECONNREFUSED) for tqpair handles 0x7fac28000b90 and 0x7fac30000b90, all targeting addr=10.0.0.2, port=4420; repeated entries elided ...]
00:32:46.383 [2024-11-20 14:51:58.019745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.019776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.020003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.020036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.020152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.020184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.020366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.020398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.020580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.020612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 
00:32:46.383 [2024-11-20 14:51:58.020878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.020910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.021036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.021068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.021179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.021210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.021348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.021382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.021503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.021537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 
00:32:46.383 [2024-11-20 14:51:58.021738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.021771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.021967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.022117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.022261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.022413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 
00:32:46.383 [2024-11-20 14:51:58.022553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.022718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.022871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.022902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.023061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.023095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.383 [2024-11-20 14:51:58.023276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.023309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 
00:32:46.383 [2024-11-20 14:51:58.023511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.383 [2024-11-20 14:51:58.023542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.383 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.023675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.023713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.023908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.023940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.024135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.024167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.024395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.024427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.024608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.024641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.024800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.024831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.024966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.024999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.025206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.025239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.025436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.025468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.025577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.025609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.025800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.025833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.026018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.026052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.026249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.026283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.026411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.026444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.026644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.026677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.026809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.026843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.026987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.027019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.027279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.027310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.027447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.027481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.027677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.027711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.027914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.027955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.028102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.028135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.028328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.028362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.028548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.028582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.028782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.028816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.029002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.029034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.029161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.029193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.029324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.029358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.029546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.029579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.029713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.029747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.029888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.029920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.030074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.030105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.030234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.030266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.030478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.030509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 
00:32:46.384 [2024-11-20 14:51:58.030734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.030765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.030971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.031005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.031127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.031160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.384 [2024-11-20 14:51:58.031283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.384 [2024-11-20 14:51:58.031314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.384 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.031442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.031473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 
00:32:46.385 [2024-11-20 14:51:58.031674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.031707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.031816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.031847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.031991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.032024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.032212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.032245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.032427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.032458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 
00:32:46.385 [2024-11-20 14:51:58.032585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.032617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.032798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.032829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.032977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.033009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.033192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.033224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.033403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.033435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 
00:32:46.385 [2024-11-20 14:51:58.033625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.033656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.033878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.033912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.034119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.034152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.034336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.034369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.034481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.034514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 
00:32:46.385 [2024-11-20 14:51:58.034800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.034833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.034980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.035016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.035210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.035243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.035424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.035456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 00:32:46.385 [2024-11-20 14:51:58.035564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.385 [2024-11-20 14:51:58.035595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.385 qpair failed and we were unable to recover it. 
00:32:46.388 [2024-11-20 14:51:58.056880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.056913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.057046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.057078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.057198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.057232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.057409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.057441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.057546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.057580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 
00:32:46.388 [2024-11-20 14:51:58.057727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.057761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.057885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.057919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.058103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.058134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.058235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.058266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.058451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.058483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 
00:32:46.388 [2024-11-20 14:51:58.058659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.058691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.058814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.388 [2024-11-20 14:51:58.058845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.388 qpair failed and we were unable to recover it. 00:32:46.388 [2024-11-20 14:51:58.059021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.059174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.059324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.059480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.059625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.059795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.059957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.059996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.060195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.060228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.060361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.060392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.060578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.060610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.060737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.060769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.060908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.060942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.061095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.061126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.061318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.061350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.061457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.061489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.061733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.061766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.061902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.061944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.062121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.062156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.062339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.062371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.062501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.062662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.062693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.062895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.062943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.063122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.063162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.063288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.063320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.063437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.063469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.063653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.063687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.063868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.063896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.064088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.064117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.064225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.064254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.064380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.064409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.064529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.064559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.064731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.064760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.064995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.065026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 
00:32:46.389 [2024-11-20 14:51:58.065151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.065183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.065378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.065406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.065667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.065697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.389 [2024-11-20 14:51:58.066012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.389 [2024-11-20 14:51:58.066043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.389 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.066242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.066272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 
00:32:46.390 [2024-11-20 14:51:58.066405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.066434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.066566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.066598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.066884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.066915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.067073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.067103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.067237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.067265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 
00:32:46.390 [2024-11-20 14:51:58.067474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.067503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.067691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.067722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.067996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.068026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.068239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.068275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.068413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.068442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 
00:32:46.390 [2024-11-20 14:51:58.068615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.068645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.068903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.068934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.069134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.069164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.069406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.069436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.069647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.069678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 
00:32:46.390 [2024-11-20 14:51:58.069980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.070010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.070274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.070304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.070515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.070545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.070788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.070818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.070937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.070977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 
00:32:46.390 [2024-11-20 14:51:58.071239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.071269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.071450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.071479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.071749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.071779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.071908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.071938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 00:32:46.390 [2024-11-20 14:51:58.072087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.390 [2024-11-20 14:51:58.072116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.390 qpair failed and we were unable to recover it. 
00:32:46.390 [2024-11-20 14:51:58.072296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.072328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.072517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.072548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.072758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.072789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.073051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.073081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.073309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.073340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.073532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.073561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.073677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.073706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.073939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.073978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.074219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.074253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.074454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.074488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.074617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.074650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.390 qpair failed and we were unable to recover it.
00:32:46.390 [2024-11-20 14:51:58.074861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.390 [2024-11-20 14:51:58.074892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.075127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.075160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.075301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.075334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.075521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.075554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.075851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.075883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.076110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.076142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.076402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.076435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.076565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.076597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.076789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.076821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.077097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.077129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.077259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.077291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.077427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.077461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.077765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.077804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.077974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.078009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.078127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.078159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.078354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.078386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.078578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.078612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.078837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.078870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.079004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.079038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.079220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.079252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.079466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.079497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.079644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.079677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.079870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.079903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.080104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.080137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.080413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.080610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.080642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.080923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.080965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.081192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.081224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.081426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.081458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.081736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.081768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.081966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.081999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.082178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.082210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.082404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.391 [2024-11-20 14:51:58.082436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.391 qpair failed and we were unable to recover it.
00:32:46.391 [2024-11-20 14:51:58.082653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.082687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.082823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.082856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.083073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.083105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.083252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.083287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.083483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.083515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.083823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.083855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.084078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.084114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.084250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.084283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.084529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.084561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.084845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.085056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.085088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.085359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.085391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.085615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.085647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.085794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.085827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.086026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.086060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.086170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.086203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.086469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.086501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.086789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.086824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.087054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.087088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.087239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.087280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.087496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.087528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.087801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.087833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.088021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.088053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.088247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.088280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.088484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.088515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.088644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.088677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.089013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.089045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.089160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.089193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.089415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.089449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.089705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.089739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.089961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.089993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.090275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.090307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.090435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.090469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.090707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.090742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.090939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.090981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.091147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.091181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.091308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.091342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.392 [2024-11-20 14:51:58.091542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.392 [2024-11-20 14:51:58.091574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.392 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.091705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.091738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.091925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.091985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.092113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.092146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.092283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.092315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.092463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.092495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.092783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.092814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.093009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.093043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.093192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.093227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.093520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.093594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.093852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.093888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.094153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.094190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.094414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.094448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.094694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.094728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.094931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.094973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.095164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.095198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.095422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.095454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.095697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.095729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.095977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.096011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.096216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.096249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.096405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.096438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.096555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.096588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.096775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.096807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.096976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.097012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.097217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.097250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.097383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.097415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.097666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.097698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.097825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.097857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.098012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.098045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.098171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.098203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.098412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.098445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.098684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.098716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.098905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.098936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.099122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.393 [2024-11-20 14:51:58.099157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.393 qpair failed and we were unable to recover it.
00:32:46.393 [2024-11-20 14:51:58.099281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.393 [2024-11-20 14:51:58.099312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.393 qpair failed and we were unable to recover it. 00:32:46.393 [2024-11-20 14:51:58.099434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.393 [2024-11-20 14:51:58.099467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.393 qpair failed and we were unable to recover it. 00:32:46.393 [2024-11-20 14:51:58.099626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.393 [2024-11-20 14:51:58.099664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.393 qpair failed and we were unable to recover it. 00:32:46.393 [2024-11-20 14:51:58.099786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.393 [2024-11-20 14:51:58.099818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.393 qpair failed and we were unable to recover it. 00:32:46.393 [2024-11-20 14:51:58.099940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.099983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.100105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.100137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.100277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.100310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.100438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.100470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.100599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.100630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.100815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.100848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.101050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.101084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.101333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.101365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.101505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.101537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.101679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.101711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.101904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.101937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.102075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.102106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.102244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.102277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.102397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.102429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.102546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.102578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.102762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.102796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.102991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.103023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.103138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.103170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.103294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.103327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.103573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.103604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.103801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.103833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.104031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.104064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.104267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.104300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.104440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.104472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.104613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.104645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.104900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.104938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.105109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.105142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.105265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.105297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.105428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.105459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.105645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.105678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.105797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.105829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.106025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.106058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.106189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.106222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.106398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.106430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.106657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.106689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.106957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.106990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 
00:32:46.394 [2024-11-20 14:51:58.107196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.107227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.107415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.107448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.394 qpair failed and we were unable to recover it. 00:32:46.394 [2024-11-20 14:51:58.107692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.394 [2024-11-20 14:51:58.107725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.107930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.107978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.108114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.108147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.108349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.108380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.108529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.108561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.108763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.108794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.109042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.109075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.109337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.109369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.109608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.109641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.109830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.109862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.110058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.110092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.110294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.110325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.110522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.110553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.110802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.110832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.111044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.111077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.111219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.111250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.111520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.111552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.111765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.111796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.112079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.112113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.112262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.112295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.112553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.112585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.112796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.112827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.113027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.113061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.113251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.113282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.113422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.113454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.113586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.113617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.113828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.113861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.114076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.114112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.114390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.114428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.114694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.114726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.115015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.115049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.115183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.115215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.115366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.115399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 
00:32:46.395 [2024-11-20 14:51:58.115669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.115701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.115980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.116015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.116249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.116281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.116430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.395 [2024-11-20 14:51:58.116462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.395 qpair failed and we were unable to recover it. 00:32:46.395 [2024-11-20 14:51:58.116585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.396 [2024-11-20 14:51:58.116617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.396 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.142742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.142774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.142908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.142939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.143178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.143211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.143423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.143456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.143762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.143794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.143996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.144030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.144243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.144274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.144526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.144558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.144861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.144893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.145033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.145066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.145263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.145295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.145447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.145479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.145747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.145780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.146077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.146110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.146248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.146279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.146480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.146511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.146742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.146781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.146916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.146958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.147117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.147148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.147340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.147373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.147616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.147648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.147777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.147809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.148020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.148053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.148305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.148336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.148481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.148514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.148792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.148824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.149043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.149075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.149274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.149305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.149662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.149694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.149904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.149935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.150195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.150236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.150438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.150471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.150778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.150812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.151006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.151039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 00:32:46.399 [2024-11-20 14:51:58.151198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.151230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.399 qpair failed and we were unable to recover it. 
00:32:46.399 [2024-11-20 14:51:58.151379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.399 [2024-11-20 14:51:58.151411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.151611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.151644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.151917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.151964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.152160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.152193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.152428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.152461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 
00:32:46.400 [2024-11-20 14:51:58.152598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.152630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.152822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.152853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.153057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.153090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.153290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.153321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.153534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.153567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 
00:32:46.400 [2024-11-20 14:51:58.153756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.153787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.154041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.154074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.154284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.154316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.154537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.154570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.154789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.154821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 
00:32:46.400 [2024-11-20 14:51:58.155009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.155042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.155225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.155258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.155458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.155491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.155632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.155663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.155874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.155906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 
00:32:46.400 [2024-11-20 14:51:58.156195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.156227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.156437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.156470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.156608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.156646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.156907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.156939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.157185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.157217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 
00:32:46.400 [2024-11-20 14:51:58.157429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.157461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.157680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.157711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.157996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.158030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.158162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.158197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.158411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.158443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 
00:32:46.400 [2024-11-20 14:51:58.158682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.158714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.158998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.159036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.159251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.159283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.400 [2024-11-20 14:51:58.159556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.400 [2024-11-20 14:51:58.159589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.400 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.159772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.159806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 
00:32:46.401 [2024-11-20 14:51:58.160079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.160113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.160270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.160302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.160520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.160552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.160845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.160877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.161076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.161109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 
00:32:46.401 [2024-11-20 14:51:58.161313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.161344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.161646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.161682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.161977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.162011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.162142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.162174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 00:32:46.401 [2024-11-20 14:51:58.162456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.401 [2024-11-20 14:51:58.162488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.401 qpair failed and we were unable to recover it. 
00:32:46.401 - 00:32:46.404 [2024-11-20 14:51:58.162675 .. 14:51:58.193431] previous message sequence repeated 110 more times: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:32:46.404 [2024-11-20 14:51:58.193628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.193663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.193797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.193830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.194118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.194153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.194350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.194389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.194660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.194692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 
00:32:46.404 [2024-11-20 14:51:58.194914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.194960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.195277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.195313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.195593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.195626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.195928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.195974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.196184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.196217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 
00:32:46.404 [2024-11-20 14:51:58.196501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.196534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.196739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.196772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.196918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.196965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.197105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.197139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.197355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.197390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 
00:32:46.404 [2024-11-20 14:51:58.197696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.197728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.197990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.198025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.198346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.198379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.198594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.198627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.198811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.198843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 
00:32:46.404 [2024-11-20 14:51:58.199122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.199158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.199443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.199477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.199732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.199766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.200022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.200056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.200265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.200299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 
00:32:46.404 [2024-11-20 14:51:58.200510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.200542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.200853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.200886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.201029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.201063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.201343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.201377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.404 [2024-11-20 14:51:58.201629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.201661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 
00:32:46.404 [2024-11-20 14:51:58.201941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.404 [2024-11-20 14:51:58.201995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.404 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.202291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.202325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.202567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.202601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.202877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.202911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.203198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.203235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 
00:32:46.405 [2024-11-20 14:51:58.203417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.203451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.203672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.203706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.203987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.204022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.204210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.204244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.204501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.204534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 
00:32:46.405 [2024-11-20 14:51:58.204788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.204820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.205024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.205059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.205337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.205372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.205504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.205537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.205745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.205779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 
00:32:46.405 [2024-11-20 14:51:58.205982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.206017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.206160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.206194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.206430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.206463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.206767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.206799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.206929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.206998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 
00:32:46.405 [2024-11-20 14:51:58.207285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.207319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.207515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.207546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.207746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.207779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.208032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.208066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.208346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.208380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 
00:32:46.405 [2024-11-20 14:51:58.208634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.208668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.208923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.208966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.209182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.405 [2024-11-20 14:51:58.209214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.405 qpair failed and we were unable to recover it. 00:32:46.405 [2024-11-20 14:51:58.209428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.209460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.209695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.209728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 
00:32:46.406 [2024-11-20 14:51:58.210000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.210034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.210293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.210325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.210583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.210618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.210922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.210972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.211251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.211285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 
00:32:46.406 [2024-11-20 14:51:58.211556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.211590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.211790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.211823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.212089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.212121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.212401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.212435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.212721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.212753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 
00:32:46.406 [2024-11-20 14:51:58.212904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.212939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.213229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.213269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.213561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.213593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.213860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.213891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 00:32:46.406 [2024-11-20 14:51:58.214105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.406 [2024-11-20 14:51:58.214138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.406 qpair failed and we were unable to recover it. 
00:32:46.406 [2024-11-20 14:51:58.214395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.406 [2024-11-20 14:51:58.214426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.406 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / qpair failure sequence for tqpair=0x59eba0 (addr=10.0.0.2, port=4420) repeated for timestamps 14:51:58.214630 through 14:51:58.244698; repeats omitted ...]
00:32:46.409 [2024-11-20 14:51:58.244905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.244938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.245230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.245263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.245458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.245490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.245749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.245780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.246040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.246076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 
00:32:46.409 [2024-11-20 14:51:58.246330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.246363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.246546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.246577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.246780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.246812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.247071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.247106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.247300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.247333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 
00:32:46.409 [2024-11-20 14:51:58.247518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.247551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.247746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.247778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.247969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.248004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.248142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.248175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 00:32:46.409 [2024-11-20 14:51:58.248452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.409 [2024-11-20 14:51:58.248484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.409 qpair failed and we were unable to recover it. 
00:32:46.409 [2024-11-20 14:51:58.248789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.248820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.249081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.249115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.249421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.249459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.249709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.249741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.250053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.250088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 
00:32:46.410 [2024-11-20 14:51:58.250380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.250413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.250686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.250718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.251003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.251037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.251317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.251349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.251631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.251664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 
00:32:46.410 [2024-11-20 14:51:58.251959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.251993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.252262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.252295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.252573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.252605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.252895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.252927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.253152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.253185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 
00:32:46.410 [2024-11-20 14:51:58.253442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.253473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.253779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.253812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.254146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.254184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.254471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.254504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.254781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.254813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 
00:32:46.410 [2024-11-20 14:51:58.255109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.255143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.255410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.255443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.255578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.255610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.255862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.255893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.256110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.256144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 
00:32:46.410 [2024-11-20 14:51:58.256426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.256459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.256712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.256743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.256992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.257026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.257326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.257358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.257627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.257660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 
00:32:46.410 [2024-11-20 14:51:58.257945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.257989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.258200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.258233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.258459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.258490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.410 qpair failed and we were unable to recover it. 00:32:46.410 [2024-11-20 14:51:58.258614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.410 [2024-11-20 14:51:58.258647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.258917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.258975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.259261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.259293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.259570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.259603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.259864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.259898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.260054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.260088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.260413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.260446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.260593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.260625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.260882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.260916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.261108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.261142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.261328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.261367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.261645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.261679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.261940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.261987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.262278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.262312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.262574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.262606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.262815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.262848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.263118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.263154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.263434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.263466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.263753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.263785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.263994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.264028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.264268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.264300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.264581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.264613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.264839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.264871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.265129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.265163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.265470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.265504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.265757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.265789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.266070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.266104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.266390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.266422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.266633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.266665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.266854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.266886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.267113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.267148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.267354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.267386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 
00:32:46.411 [2024-11-20 14:51:58.267575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.267608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.267792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.267823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.268088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.268122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.268326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.411 [2024-11-20 14:51:58.268358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.411 qpair failed and we were unable to recover it. 00:32:46.411 [2024-11-20 14:51:58.268650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.412 [2024-11-20 14:51:58.268683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.412 qpair failed and we were unable to recover it. 
00:32:46.412 [2024-11-20 14:51:58.268935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.268986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.269278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.269310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.269495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.269526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.269737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.269771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.270047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.270080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.270366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.270398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.270650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.270683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.270956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.270990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.271215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.271246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.271480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.271512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.271723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.271755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.271979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.272013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.272292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.272325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.272629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.272660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.272857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5acaf0 is same with the state(6) to be set
00:32:46.412 [2024-11-20 14:51:58.273334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.273405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.273690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.273728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.273982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.274017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.274213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.274245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.274539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.274572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.274874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.274906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.275208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.275241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.275458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.275489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.275692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.275723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.275922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.275964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.276157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.276189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.276405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.276438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.276706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.276739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.276962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.276997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.277181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.277214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.277411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.277443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.277673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.277706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.278006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.278039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.278249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.278281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.278534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.412 [2024-11-20 14:51:58.278567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.412 qpair failed and we were unable to recover it.
00:32:46.412 [2024-11-20 14:51:58.278874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.278905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.279132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.279165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.279322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.279355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.279493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.279525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.279733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.279765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.280080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.280114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.280394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.280432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.280718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.280750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.280971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.281005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.281226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.281258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.281484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.281516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.281734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.281767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.282023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.282056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.282237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.282269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.282500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.282533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.282734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.282766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.282970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.283004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.283267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.283300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.283568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.283600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.283884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.283916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.284144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.284178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.284459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.284491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.284776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.284809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.285080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.285114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.285407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.285439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.285658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.285690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.285974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.286008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.286264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.286296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.286597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.286630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.286898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.286931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.287204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.287238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.287493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.287524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.287659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.287691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.287897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.287929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.288139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.288172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.288331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.413 [2024-11-20 14:51:58.288364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.413 qpair failed and we were unable to recover it.
00:32:46.413 [2024-11-20 14:51:58.288644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.288675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.288873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.288906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.289153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.289187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.289444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.289476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.289777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.289809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.290101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.290137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.290364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.290396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.290721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.290753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.290985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.291020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.291307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.291341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.291612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.291650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.291859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.291892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.292206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.292239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.292434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.292466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.292759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.292792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.293087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.293119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.293342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.293375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.293680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.293713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.293976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.294009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.294234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.294266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.294476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.294509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.294761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.294794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.295058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.295091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.295394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.295427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.295729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.414 [2024-11-20 14:51:58.295761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.414 qpair failed and we were unable to recover it.
00:32:46.414 [2024-11-20 14:51:58.296028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.296062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.296363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.296396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.296631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.296664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.296939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.296982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.297205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.297239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 
00:32:46.414 [2024-11-20 14:51:58.297493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.297526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.297749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.297781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.297975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.298009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.298225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.298257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 00:32:46.414 [2024-11-20 14:51:58.298536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.414 [2024-11-20 14:51:58.298568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.414 qpair failed and we were unable to recover it. 
00:32:46.414 [2024-11-20 14:51:58.298858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.298891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.299190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.299223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.299496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.299528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.299820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.299854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.300143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.300176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 
00:32:46.415 [2024-11-20 14:51:58.300426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.300459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.300787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.300821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.301098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.301131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.301380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.301412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.301684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.301716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 
00:32:46.415 [2024-11-20 14:51:58.301913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.301946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.302177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.302209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.302423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.302455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.302660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.302692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.302905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.302937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 
00:32:46.415 [2024-11-20 14:51:58.303201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.303240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.303513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.303546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.303799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.303832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.304093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.304127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.304430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.304462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 
00:32:46.415 [2024-11-20 14:51:58.304655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.304688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.304956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.304991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.305258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.305290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.305546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.305579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.305889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.305922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 
00:32:46.415 [2024-11-20 14:51:58.306147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.306180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.306382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.306414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.306611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.306644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.306845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.306877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.307108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.307142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 
00:32:46.415 [2024-11-20 14:51:58.307368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.307401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.307600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.415 [2024-11-20 14:51:58.307632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.415 qpair failed and we were unable to recover it. 00:32:46.415 [2024-11-20 14:51:58.307903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.307934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.308234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.308268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.308548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.308580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.308799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.308832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.309101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.309135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.309377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.309409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.309591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.309623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.309829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.309863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.310110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.310143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.310280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.310312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.310519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.310552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.310760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.310792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.311013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.311047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.311318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.311350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.311550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.311582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.311833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.311865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.312088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.312122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.312354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.312385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.312573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.312606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.312887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.312920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.313207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.313240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.313544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.313576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.313776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.313810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.314070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.314109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.314390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.314423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.314648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.314682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.314906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.314971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.315266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.315304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.315518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.315554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.315758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.315790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.316095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.316135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.316416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.316451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.316714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.316746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 
00:32:46.416 [2024-11-20 14:51:58.316936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.316981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.317279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.317327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.317641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.317688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.416 qpair failed and we were unable to recover it. 00:32:46.416 [2024-11-20 14:51:58.317968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.416 [2024-11-20 14:51:58.318009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 00:32:46.417 [2024-11-20 14:51:58.318329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.417 [2024-11-20 14:51:58.318375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 
00:32:46.417 [2024-11-20 14:51:58.318656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.417 [2024-11-20 14:51:58.318692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 00:32:46.417 [2024-11-20 14:51:58.318998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.417 [2024-11-20 14:51:58.319033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 00:32:46.417 [2024-11-20 14:51:58.319275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.417 [2024-11-20 14:51:58.319308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 00:32:46.417 [2024-11-20 14:51:58.319554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.417 [2024-11-20 14:51:58.319598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 00:32:46.417 [2024-11-20 14:51:58.319911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.417 [2024-11-20 14:51:58.319957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.417 qpair failed and we were unable to recover it. 
00:32:46.417 [2024-11-20 14:51:58.320284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.417 [2024-11-20 14:51:58.320322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.417 qpair failed and we were unable to recover it.
00:32:46.697 (the previous three messages repeat ~110 more times between 14:51:58.320544 and 14:51:58.351800, all for tqpair=0x7fac30000b90, addr=10.0.0.2, port=4420)
00:32:46.700 [2024-11-20 14:51:58.351936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.700 [2024-11-20 14:51:58.351982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.700 qpair failed and we were unable to recover it.
00:32:46.700 [2024-11-20 14:51:58.352245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.352276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.352555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.352587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.352792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.352825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.353031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.353064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.353205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.353235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 
00:32:46.700 [2024-11-20 14:51:58.353451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.353484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.353738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.353770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.353966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.353998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.354192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.354225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.354425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.354457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 
00:32:46.700 [2024-11-20 14:51:58.354660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.354692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.354900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.354932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.355077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.355111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.355375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.355406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.355625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.355657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 
00:32:46.700 [2024-11-20 14:51:58.355785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.355818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.356018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.356052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.356205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.356237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.356458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.356489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.356689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.356721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 
00:32:46.700 [2024-11-20 14:51:58.356942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.356989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.357194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.357226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.357366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.357397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.357592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.357625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.357827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.357859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 
00:32:46.700 [2024-11-20 14:51:58.358157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.358198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.700 [2024-11-20 14:51:58.358406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.700 [2024-11-20 14:51:58.358438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.700 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.358662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.358694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.358812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.358844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.358972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.359006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 
00:32:46.701 [2024-11-20 14:51:58.359195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.359227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.359381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.359412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.359612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.359646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.359840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.359872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.360054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.360087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 
00:32:46.701 [2024-11-20 14:51:58.360219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.360252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.360460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.360491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.360638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.360670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.360875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.360909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.361124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.361158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 
00:32:46.701 [2024-11-20 14:51:58.361282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.361313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.361582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.361615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.361816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.361847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.362033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.362067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.362286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.362320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 
00:32:46.701 [2024-11-20 14:51:58.362526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.362558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.362857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.362890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.363104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.363138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.363348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.363380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.363564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.363596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 
00:32:46.701 [2024-11-20 14:51:58.363749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.363782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.363968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.364001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.364214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.364246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.364377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.364409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.364603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.364635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 
00:32:46.701 [2024-11-20 14:51:58.364812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.364843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.365043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.365077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.365273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.365305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.701 qpair failed and we were unable to recover it. 00:32:46.701 [2024-11-20 14:51:58.365436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.701 [2024-11-20 14:51:58.365466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.365660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.365693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.365980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.366014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.366295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.366327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.366461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.366493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.366631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.366663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.366905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.366937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.367213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.367253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.367452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.367483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.367671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.367703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.368013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.368050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.368235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.368268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.368491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.368522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.368711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.368743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.368945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.369003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.369209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.369241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.369370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.369403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.369677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.369709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.369838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.369870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.370080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.370114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.370250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.370282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.370439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.370471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.370673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.370706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.370903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.370935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.371069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.371102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.371359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.371392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.371594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.371626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.371769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.371801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.372086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.372120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.372390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.372422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.372550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.372582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.372808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.372841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.373041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.373074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.373259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.373290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.373489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.373523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.373779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.373810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.374083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.374117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 
00:32:46.702 [2024-11-20 14:51:58.374263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.702 [2024-11-20 14:51:58.374296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.702 qpair failed and we were unable to recover it. 00:32:46.702 [2024-11-20 14:51:58.374446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.374491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.374757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.374789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.375043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.375077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.375204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.375247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.375514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.375550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.375846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.375883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.376148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.376181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.376363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.376406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.376698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.376732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.377033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.377075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.377276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.377309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.377565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.377597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.377774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.377806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.378058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.378093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.378232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.378264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.378521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.378552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.378674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.378706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.378844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.378885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.379097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.379131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.379332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.379365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.379582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.379615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.379726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.379758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.379966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.380000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.380128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.380162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.380340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.380373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.380567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.380598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.380724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.380756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.380983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.381016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.381282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.381314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.381456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.381490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.381741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.381773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.381985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.382019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.382167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.382201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.382422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.382454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 
00:32:46.703 [2024-11-20 14:51:58.382631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.382663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.382917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.382961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.383232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.383264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.383479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.703 [2024-11-20 14:51:58.383511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.703 qpair failed and we were unable to recover it. 00:32:46.703 [2024-11-20 14:51:58.383657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.383690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.383827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.383859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.384125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.384159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.384435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.384468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.384668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.384700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.384911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.384943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.385164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.385197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.385482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.385513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.385662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.385694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.385830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.385862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.386058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.386091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.386278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.386317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.386439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.386471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.386651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.386682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.386815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.386847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.387099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.387132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.387260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.387292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.387404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.387436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.387690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.387722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.387856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.387887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.388074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.388107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.388250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.388283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.388532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.388564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.388743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.388774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.388978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.389012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.389295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.389327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.389532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.389564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.389762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.389794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.390076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.390109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.390301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.390333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.390540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.390572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.390777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.390809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.391009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.391042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.391244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.391276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.391470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.391501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.391781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.391813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 
00:32:46.704 [2024-11-20 14:51:58.392019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.392053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.704 [2024-11-20 14:51:58.392235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.704 [2024-11-20 14:51:58.392267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.704 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.392545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.392578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.392833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.392865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.393111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.393144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 
00:32:46.705 [2024-11-20 14:51:58.393337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.393369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.393558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.393590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.393789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.393820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.394078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.394112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 00:32:46.705 [2024-11-20 14:51:58.394295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.705 [2024-11-20 14:51:58.394328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.705 qpair failed and we were unable to recover it. 
00:32:46.705 [2024-11-20 14:51:58.394523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.394554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.394802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.394834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.395086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.395120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.395322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.395354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.395565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.395596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.395798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.395831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.396047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.396080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.396275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.396306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.396489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.396522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.396720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.396751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.396880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.396913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.397178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.397212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.397397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.397429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.397616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.397648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.397842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.397873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.398052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.398086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.398354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.398386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.398511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.398544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.398672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.398704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.398989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.399022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.399219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.399252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.399396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.399427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.399635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.399667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.399862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.399895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.400146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.400179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.400355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.400386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.400603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.400634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.400863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.400895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.401024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.401057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.705 [2024-11-20 14:51:58.401251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.705 [2024-11-20 14:51:58.401283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.705 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.401479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.401510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.401785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.401817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.402030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.402070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.402251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.402283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.402478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.402510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.402728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.402760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.403029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.403061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.403243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.403276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.403457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.403490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.403693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.403725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.403926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.403968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.404226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.404259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.404525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.404556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.404701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.404732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.404918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.404979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.405093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.405125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.405402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.405433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.405645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.405678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.405805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.405836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.406023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.406057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.406327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.406360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.406553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.406584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.406761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.406793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.407007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.407041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.407285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.407317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.407452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.407483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.407664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.407696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.407889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.407921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.408129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.408161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.408417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.408450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.408705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.408737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.408843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.408874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.706 [2024-11-20 14:51:58.409064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.706 [2024-11-20 14:51:58.409098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.706 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.409274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.409305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.409438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.409470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.409655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.409687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.409862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.409894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.410119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.410153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.410329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.410361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.410539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.410570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.410836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.410869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.411072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.411105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.411317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.411355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.411592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.411623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.411807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.411839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.412015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.412048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.412313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.412345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.412626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.412658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.412834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.412866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.413121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.413155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.413336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.413369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.413490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.413522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.413637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.413669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.413873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.413905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.414099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.414131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.414387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.414418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.414673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.414705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.414897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.414929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.415130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.415162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.415434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.415466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.415711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.415742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.415929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.415982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.416260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.416292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.416464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.416496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.416637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.707 [2024-11-20 14:51:58.416669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.707 qpair failed and we were unable to recover it.
00:32:46.707 [2024-11-20 14:51:58.416944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.707 [2024-11-20 14:51:58.417000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.707 qpair failed and we were unable to recover it. 00:32:46.707 [2024-11-20 14:51:58.417198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.707 [2024-11-20 14:51:58.417231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.707 qpair failed and we were unable to recover it. 00:32:46.707 [2024-11-20 14:51:58.417357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.707 [2024-11-20 14:51:58.417388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.707 qpair failed and we were unable to recover it. 00:32:46.707 [2024-11-20 14:51:58.417581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.707 [2024-11-20 14:51:58.417614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.707 qpair failed and we were unable to recover it. 00:32:46.707 [2024-11-20 14:51:58.417797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.707 [2024-11-20 14:51:58.417829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.707 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.418076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.418109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.418300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.418331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.418596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.418627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.418815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.418846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.419052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.419086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.419340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.419372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.419612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.419643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.419758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.419790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.419920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.419963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.420204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.420236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.420430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.420461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.420647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.420679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.420881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.420919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.421178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.421212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.421416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.421448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.421658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.421691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.421997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.422030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.422209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.422241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.422375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.422407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.422658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.422691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.422827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.422859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.423049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.423083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.423218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.423251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.423465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.423496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.423607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.423639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.423763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.423796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.423977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.424011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.424257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.424288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.424469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.424501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.424694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.424726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.424917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.424975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.425094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.425127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.425299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.425330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.425542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.425573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.425753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.425785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 
00:32:46.708 [2024-11-20 14:51:58.425972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.426006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.426205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.426236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.708 qpair failed and we were unable to recover it. 00:32:46.708 [2024-11-20 14:51:58.426362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.708 [2024-11-20 14:51:58.426394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.426655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.426686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.426886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.426918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.427055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.427087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.427354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.427386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.427575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.427607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.427847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.427880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.428136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.428168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.428434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.428466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.428654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.428685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.428857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.428888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.429088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.429121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.429265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.429297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.429483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.429516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.429635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.429667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.429915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.429966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.430158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.430190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.430372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.430403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.430537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.430569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.430757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.430788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.430971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.431005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.431203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.431236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.431479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.431511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.431751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.431783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.431997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.432030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.432286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.432317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.432426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.432457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.432722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.432754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.433043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.433077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.433207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.433239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.433440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.433472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.433688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.433720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.433856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.433889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.434080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.434114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.434312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.434346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.434541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.434573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.434843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.434894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 00:32:46.709 [2024-11-20 14:51:58.435105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.435138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.709 qpair failed and we were unable to recover it. 
00:32:46.709 [2024-11-20 14:51:58.435265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.709 [2024-11-20 14:51:58.435296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.710 qpair failed and we were unable to recover it. 00:32:46.710 [2024-11-20 14:51:58.435492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.710 [2024-11-20 14:51:58.435524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.710 qpair failed and we were unable to recover it. 00:32:46.710 [2024-11-20 14:51:58.435793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.710 [2024-11-20 14:51:58.435825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.710 qpair failed and we were unable to recover it. 00:32:46.710 [2024-11-20 14:51:58.436113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.710 [2024-11-20 14:51:58.436146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.710 qpair failed and we were unable to recover it. 00:32:46.710 [2024-11-20 14:51:58.436269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.710 [2024-11-20 14:51:58.436302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.710 qpair failed and we were unable to recover it. 
00:32:46.710 [2024-11-20 14:51:58.436551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.710 [2024-11-20 14:51:58.436583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:46.710 qpair failed and we were unable to recover it.
00:32:46.710 [last message repeated 114 more times between 14:51:58.436782 and 14:51:58.462534 for tqpair=0x7fac30000b90, addr=10.0.0.2, port=4420, errno = 111]
00:32:46.713 [2024-11-20 14:51:58.462833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.462864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.463008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.463042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.463164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.463196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.463327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.463358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.463542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.463576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 
00:32:46.713 [2024-11-20 14:51:58.463819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.463850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.463969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.464003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.464130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.464164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.464347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.464379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.464650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.464682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 
00:32:46.713 [2024-11-20 14:51:58.464897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.464930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.465151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.465181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.465416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.465446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.465573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.465604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.465721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.465752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 
00:32:46.713 [2024-11-20 14:51:58.466013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.466044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.466219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.466249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.466419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.466473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.466730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.466760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.466877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.466908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 
00:32:46.713 [2024-11-20 14:51:58.467107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.467137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.467316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.467347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.467535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.467565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.467682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.467712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.467813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.467843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 
00:32:46.713 [2024-11-20 14:51:58.468078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.713 [2024-11-20 14:51:58.468110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.713 qpair failed and we were unable to recover it. 00:32:46.713 [2024-11-20 14:51:58.468367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.468396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.468586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.468615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.468883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.468915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.469117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.469149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.469362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.469392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.469603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.469635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.469738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.469767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.469941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.469988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.470177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.470207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.470377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.470409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.470614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.470646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.470779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.470810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.470979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.471012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.471140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.471172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.471308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.471339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.471578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.471608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.471737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.471768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.471903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.471936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.472127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.472162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.472427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.472460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.472609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.472641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.472829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.472861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.473074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.473265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.473298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.473437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.473469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.473587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.473619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.473731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.473763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.474016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.474049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.474268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.474300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.474498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.474531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.474729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.474760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.474990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.475030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.475159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.475192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.475462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.475494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 
00:32:46.714 [2024-11-20 14:51:58.475703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.475735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.475915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.475956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.476138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.476169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.714 [2024-11-20 14:51:58.476384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.714 [2024-11-20 14:51:58.476415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.714 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.476539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.476571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 
00:32:46.715 [2024-11-20 14:51:58.476830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.476862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.477050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.477083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.477274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.477307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.477496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.477528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.477630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.477662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 
00:32:46.715 [2024-11-20 14:51:58.477797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.477829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.478096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.478130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.478304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.478337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.478541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.478572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.478677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.478709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 
00:32:46.715 [2024-11-20 14:51:58.478962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.478997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.479199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.479234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.479471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.479503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.479617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.479649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 00:32:46.715 [2024-11-20 14:51:58.479891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.715 [2024-11-20 14:51:58.479923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.715 qpair failed and we were unable to recover it. 
[identical three-line sequence repeats continuously from 14:51:58.480189 through 14:51:58.504722 — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 (and intermittently tqpair=0x7fac28000b90) with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:32:46.718 [2024-11-20 14:51:58.504904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.504936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.505201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.505235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.505367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.505400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.505601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.505632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.505815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.505846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 
00:32:46.718 [2024-11-20 14:51:58.506023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.506057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.506242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.506274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.506512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.506544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.506663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.506694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.506874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.506905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 
00:32:46.718 [2024-11-20 14:51:58.507024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.507057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.507244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.507277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.507447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.507478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.507644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.507676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 00:32:46.718 [2024-11-20 14:51:58.507883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.507915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it. 
00:32:46.718 [2024-11-20 14:51:58.508157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.508212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it.
00:32:46.718 [2024-11-20 14:51:58.508448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.718 [2024-11-20 14:51:58.508520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.718 qpair failed and we were unable to recover it.
[the identical connect()/qpair error pair for tqpair=0x7fac24000b90 repeats through 14:51:58.525; entries differ only in their timestamps]
00:32:46.721 [2024-11-20 14:51:58.526036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.526068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.526245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.526276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.526389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.526419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.526588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.526617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.526855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.526886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 
00:32:46.721 [2024-11-20 14:51:58.527066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.527098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.527213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.527243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.527501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.527531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.527721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.527752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.527920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.527959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 
00:32:46.721 [2024-11-20 14:51:58.528071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.528100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.528315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.528345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.528510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.528543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.528737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.528768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.528936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.528979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 
00:32:46.721 [2024-11-20 14:51:58.529090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.529122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.529313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.529344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.529451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.529481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.529746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.529778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.529884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.529914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 
00:32:46.721 [2024-11-20 14:51:58.530103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.530136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.530316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.530347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.530456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.530486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.530723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.530755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.530919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.530981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 
00:32:46.721 [2024-11-20 14:51:58.531174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.531205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.531448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.531480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.531671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.531702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.531833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.531864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.532059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.532090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 
00:32:46.721 [2024-11-20 14:51:58.532204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.532234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.532342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.721 [2024-11-20 14:51:58.532372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.721 qpair failed and we were unable to recover it. 00:32:46.721 [2024-11-20 14:51:58.532489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.532521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.532692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.532723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.532909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.532941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.533164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.533337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.533368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.533605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.533635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.533757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.533789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.534109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.534146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.534287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.534319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.534425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.534454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.534656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.534686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.534882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.534911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.535170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.535203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.535374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.535405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.535660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.535691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.535823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.535853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.536056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.536088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.536201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.536232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.536427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.536457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.536693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.536723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.536839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.536871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.537145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.537179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.537351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.537381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.537499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.537528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.537717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.537748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.537863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.537893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.538094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.538127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.538297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.538326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.538499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.538529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.538733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.538765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.539004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.539036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.539283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.539315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.539437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.539467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.722 [2024-11-20 14:51:58.539654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.539685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.539998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.540031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.540222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.540253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.540437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.540468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 00:32:46.722 [2024-11-20 14:51:58.540597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.722 [2024-11-20 14:51:58.540628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.722 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.540817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.540847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.541025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.541057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.541322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.541354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.541489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.541520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.541757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.541789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.541919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.541959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.542222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.542253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.542378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.542406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.542578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.542609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.542795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.542833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.543101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.543132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.543328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.543359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.543590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.543621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.543870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.543903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.544123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.544155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.544277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.544309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.544439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.544468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.544644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.544675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.544800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.544830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.544941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.544984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.545240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.545272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.545441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.545473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.545647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.545677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.545961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.545994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.546206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.546237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.546429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.546461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.546647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.546678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.546776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.546807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.546931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.546970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.547153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.547182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.547444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.547476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.547666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.547696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.547816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.547848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.548057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.548090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.548376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.548406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 
00:32:46.723 [2024-11-20 14:51:58.548702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.548733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.723 [2024-11-20 14:51:58.548926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.723 [2024-11-20 14:51:58.548969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.723 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.549149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.549181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.549359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.549392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.549675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.549705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 
00:32:46.724 [2024-11-20 14:51:58.549834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.549867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.550076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.550107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.550250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.550281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.550403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.550435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.550645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.550676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 
00:32:46.724 [2024-11-20 14:51:58.550773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.550804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.550934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.550975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.551086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.551120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.551309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.551364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.551697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.551763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 
00:32:46.724 [2024-11-20 14:51:58.552061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.552134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.552289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.552326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.552514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.552603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.552907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.552969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.553114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.553156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 
00:32:46.724 [2024-11-20 14:51:58.553359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.553395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.553518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.553561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.553700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.553741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.554034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.554075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.554188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.554220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 
00:32:46.724 [2024-11-20 14:51:58.554326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.554356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.554571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.554602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.554789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.554821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.555038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.555071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.555336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.555367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 
00:32:46.724 [2024-11-20 14:51:58.555474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.555506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.555765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.555796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.555983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.556016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.556142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.724 [2024-11-20 14:51:58.556174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.724 qpair failed and we were unable to recover it. 00:32:46.724 [2024-11-20 14:51:58.556384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.556416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.556594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.556626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.556835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.556867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.556975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.557007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.557134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.557163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.557352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.557384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.557645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.557677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.557915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.557979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.558116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.558149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.558412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.558444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.558681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.558712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.558915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.558958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.559077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.559110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.559227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.559258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.559519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.559551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.559759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.559791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.560054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.560087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.560276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.560308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.560499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.560531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.560800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.560832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.560967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.561000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.561181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.561213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.561344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.561376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.561513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.561544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.561657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.561689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.561878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.561913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.562040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.562072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.562322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.562354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.562473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.562505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.562752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.562783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 00:32:46.725 [2024-11-20 14:51:58.563037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.725 [2024-11-20 14:51:58.563070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:46.725 qpair failed and we were unable to recover it. 
00:32:46.725 [2024-11-20 14:51:58.563255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:46.725 [2024-11-20 14:51:58.563287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:46.725 qpair failed and we were unable to recover it.
[... the same record pair — connect() failed, errno = 111 (ECONNREFUSED) followed by nvme_tcp_qpair_connect_sock sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 14:51:58.563480 through 14:51:58.588537, first for tqpair=0x59eba0 and then (from 14:51:58.572769 onward) for tqpair=0x7fac30000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:32:46.728 [2024-11-20 14:51:58.588655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.728 [2024-11-20 14:51:58.588687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.728 qpair failed and we were unable to recover it. 00:32:46.728 [2024-11-20 14:51:58.588874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.728 [2024-11-20 14:51:58.588906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.728 qpair failed and we were unable to recover it. 00:32:46.728 [2024-11-20 14:51:58.589102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.728 [2024-11-20 14:51:58.589134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.728 qpair failed and we were unable to recover it. 00:32:46.728 [2024-11-20 14:51:58.589334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.728 [2024-11-20 14:51:58.589365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.728 qpair failed and we were unable to recover it. 00:32:46.728 [2024-11-20 14:51:58.589617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.728 [2024-11-20 14:51:58.589649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.728 qpair failed and we were unable to recover it. 
00:32:46.728 [2024-11-20 14:51:58.589818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.589850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.590029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.590063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.590253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.590285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.590468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.590499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.590775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.590812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 
00:32:46.729 [2024-11-20 14:51:58.590936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.590977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.591088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.591119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.591372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.591404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.591592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.591623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.591808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.591840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 
00:32:46.729 [2024-11-20 14:51:58.592054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.592087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.592320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.592352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.592479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.592511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.592692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.592723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.592968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.593001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 
00:32:46.729 [2024-11-20 14:51:58.593186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.593218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.593477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.593508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.593635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.593666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.593884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.593917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.594090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.594123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 
00:32:46.729 [2024-11-20 14:51:58.594312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.594344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.594608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.594640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.594808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.594839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.594970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.595003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.595182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.595212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 
00:32:46.729 [2024-11-20 14:51:58.595406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.595438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.595636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.595668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.595786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.595817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.595922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.595971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.596155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.596185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 
00:32:46.729 [2024-11-20 14:51:58.596364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.596397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.596643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.596675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.596877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.729 [2024-11-20 14:51:58.596909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.729 qpair failed and we were unable to recover it. 00:32:46.729 [2024-11-20 14:51:58.597112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.597145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.597322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.597354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.597590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.597622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.597904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.597937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.598142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.598175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.598358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.598390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.598557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.598589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.598768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.598800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.598907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.598939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.599167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.599348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.599380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.599616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.599653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.599784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.599816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.599929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.599982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.600108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.600140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.600243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.600274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.600538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.600570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.600739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.600771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.600965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.600998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.601126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.601159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.601403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.601434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.601604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.601635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.601805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.601838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.602032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.602065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.602246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.602278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.602480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.602511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.602698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.602730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.602902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.602933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.603130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.603162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.603343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.603375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.603494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.603526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.603647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.603678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.603852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.603884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.604142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.604174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.604437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.604469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.604639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.604671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.730 [2024-11-20 14:51:58.604890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.604921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 
00:32:46.730 [2024-11-20 14:51:58.605047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.730 [2024-11-20 14:51:58.605079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.730 qpair failed and we were unable to recover it. 00:32:46.731 [2024-11-20 14:51:58.605351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.731 [2024-11-20 14:51:58.605383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.731 qpair failed and we were unable to recover it. 00:32:46.731 [2024-11-20 14:51:58.605648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.731 [2024-11-20 14:51:58.605680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.731 qpair failed and we were unable to recover it. 00:32:46.731 [2024-11-20 14:51:58.605875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.731 [2024-11-20 14:51:58.605906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.731 qpair failed and we were unable to recover it. 00:32:46.731 [2024-11-20 14:51:58.606157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.731 [2024-11-20 14:51:58.606190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.731 qpair failed and we were unable to recover it. 
[identical connect() failure (errno = 111, tqpair=0x7fac30000b90, addr=10.0.0.2, port=4420) repeated verbatim through 14:51:58.630686]
00:32:46.734 [2024-11-20 14:51:58.630964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.631005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.631204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.631239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.631431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.631463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.631737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.631769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.631987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.632021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 
00:32:46.734 [2024-11-20 14:51:58.632220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.632255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.632475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.632510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.632719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.632754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:46.734 [2024-11-20 14:51:58.632972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.734 [2024-11-20 14:51:58.633006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:46.734 qpair failed and we were unable to recover it. 00:32:47.015 [2024-11-20 14:51:58.633249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.633281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 
00:32:47.015 [2024-11-20 14:51:58.633407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.633439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 00:32:47.015 [2024-11-20 14:51:58.633644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.633677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 00:32:47.015 [2024-11-20 14:51:58.633855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.633886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 00:32:47.015 [2024-11-20 14:51:58.634063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.634097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 00:32:47.015 [2024-11-20 14:51:58.634288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.634333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 
00:32:47.015 [2024-11-20 14:51:58.634541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.015 [2024-11-20 14:51:58.634587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.015 qpair failed and we were unable to recover it. 00:32:47.015 [2024-11-20 14:51:58.634806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.634853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.635143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.635193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.635342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.635383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.635600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.635654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.635808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.635855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.636154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.636201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.636492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.636539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.636832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.636879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.637159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.637208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.637365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.637407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.637703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.637750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.637885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.637928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.638226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.638274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.638541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.638587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.638753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.638787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.638985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.639019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.639201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.639233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.639414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.639446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.639625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.639657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.639796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.639828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.640017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.640051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.640259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.640291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.640399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.640430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.640614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.640645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.640830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.640862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.641051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.641085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.641204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.641235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.641417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.641448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.641583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.641614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.641807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.641839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.642106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.642141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.642354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.642516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.642547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.642729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.642761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 
00:32:47.016 [2024-11-20 14:51:58.642892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.642924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.643177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.643210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.643470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.016 [2024-11-20 14:51:58.643502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.016 qpair failed and we were unable to recover it. 00:32:47.016 [2024-11-20 14:51:58.643696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.643728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.643896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.643928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 
00:32:47.017 [2024-11-20 14:51:58.644145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.644178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.644299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.644330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.644519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.644551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.644685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.644716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.644833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.644871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 
00:32:47.017 [2024-11-20 14:51:58.645131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.645165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.645348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.645380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.645618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.645649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.645780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.645812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.646081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.646115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 
00:32:47.017 [2024-11-20 14:51:58.646329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.646361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.646552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.646583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.646703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.646735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.646910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.646942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.647216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.647249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 
00:32:47.017 [2024-11-20 14:51:58.647424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.647456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.647654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.647686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.647875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.647907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.648126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.648160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 00:32:47.017 [2024-11-20 14:51:58.648347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.017 [2024-11-20 14:51:58.648379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.017 qpair failed and we were unable to recover it. 
00:32:47.017 [2024-11-20 14:51:58.648558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.017 [2024-11-20 14:51:58.648590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.017 qpair failed and we were unable to recover it.
00:32:47.020 [log collapsed: the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fac30000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 14:51:58.648558 through 14:51:58.674519]
00:32:47.020 [2024-11-20 14:51:58.674757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.674789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 00:32:47.020 [2024-11-20 14:51:58.674921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.674959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 00:32:47.020 [2024-11-20 14:51:58.675150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.675181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 00:32:47.020 [2024-11-20 14:51:58.675366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.675399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 00:32:47.020 [2024-11-20 14:51:58.675586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.675617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 
00:32:47.020 [2024-11-20 14:51:58.675753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.675784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 00:32:47.020 [2024-11-20 14:51:58.675993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.020 [2024-11-20 14:51:58.676026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.020 qpair failed and we were unable to recover it. 00:32:47.020 [2024-11-20 14:51:58.676294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.676326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.676584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.676616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.676750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.676782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.676971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.677004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.677266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.677298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.677421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.677452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.677715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.677747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.677969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.678002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.678263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.678295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.678487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.678519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.678697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.678728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.678987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.679023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.679198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.679228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.679412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.679444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.679663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.679695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.679882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.679913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.680068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.680102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.680239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.680271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.680460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.680491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.680739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.680771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.680963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.680995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.681191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.681223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.681410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.681448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.681564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.681596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.681811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.681842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.682024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.682058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.682186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.682218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.682407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.682439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.682611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.682640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.682831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.682861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.683050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.683082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.683218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.683248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.683467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.683499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.683674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.683708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.683835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.683867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.684041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.684076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.684270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.684303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 00:32:47.021 [2024-11-20 14:51:58.684479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.021 [2024-11-20 14:51:58.684510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.021 qpair failed and we were unable to recover it. 
00:32:47.021 [2024-11-20 14:51:58.684748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.684780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.684967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.685000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.685183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.685216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.685458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.685490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.685674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.685706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 
00:32:47.022 [2024-11-20 14:51:58.685972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.686005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.686138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.686170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.686410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.686442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.686695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.686727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 00:32:47.022 [2024-11-20 14:51:58.687010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.022 [2024-11-20 14:51:58.687044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.022 qpair failed and we were unable to recover it. 
00:32:47.022 [2024-11-20 14:51:58.687218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.022 [2024-11-20 14:51:58.687250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.022 qpair failed and we were unable to recover it.
00:32:47.022 [2024-11-20 14:51:58.687494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.022 [2024-11-20 14:51:58.687563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.022 qpair failed and we were unable to recover it.
00:32:47.023 [... above three-line sequence repeated for tqpair=0x59eba0, timestamps 14:51:58.687694 through 14:51:58.695848 ...]
00:32:47.023 [2024-11-20 14:51:58.696035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.696069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.696267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.696299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.696424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.696456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.696719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.696751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.696861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.696892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 
00:32:47.023 [2024-11-20 14:51:58.697112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.697145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.697257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.697289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.697504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.697535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.697709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.697740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.697870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.697903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 
00:32:47.023 [2024-11-20 14:51:58.698085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.698117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.698287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.698319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.698507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.698539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.698659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.698691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.698881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.698914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 
00:32:47.023 [2024-11-20 14:51:58.699042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.699074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.699312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.699345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.699617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.699648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.699778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.699811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.699995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.700029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 
00:32:47.023 [2024-11-20 14:51:58.700265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.700297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.700418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.700449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.700686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.700717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.700902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.700932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.701065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.701098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 
00:32:47.023 [2024-11-20 14:51:58.701208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.701239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.023 qpair failed and we were unable to recover it. 00:32:47.023 [2024-11-20 14:51:58.701369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.023 [2024-11-20 14:51:58.701401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.701569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.701600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.701773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.701805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.701936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.702087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.702331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.702364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.702630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.702661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.702786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.702818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.703010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.703044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.703226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.703259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.703513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.703545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.703785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.703817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.704024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.704056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.704176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.704209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.704339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.704371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.704545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.704577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.704748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.704780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.704968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.705002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.705126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.705159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.705352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.705384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.705598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.705630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.705819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.705852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.706025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.706058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.706247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.706280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.706543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.706575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.706693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.706724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.706905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.706936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.707066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.707097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.707227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.707259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.707452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.707484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.707654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.707685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.707898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.707936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.708118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.708151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.708323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.708355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.708474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.708505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 
00:32:47.024 [2024-11-20 14:51:58.708766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.708799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.708980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.709014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.709204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.024 [2024-11-20 14:51:58.709237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.024 qpair failed and we were unable to recover it. 00:32:47.024 [2024-11-20 14:51:58.709479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.709511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.709636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.709668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 
00:32:47.025 [2024-11-20 14:51:58.709841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.709872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.710051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.710085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.710365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.710397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.710566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.710599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.710857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.710889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 
00:32:47.025 [2024-11-20 14:51:58.711155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.711191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.711376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.711409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.711604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.711635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.711812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.711843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.712021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.712054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 
00:32:47.025 [2024-11-20 14:51:58.712234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.712266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.712528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.712561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.712753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.712786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.712890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.712921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 00:32:47.025 [2024-11-20 14:51:58.713119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.025 [2024-11-20 14:51:58.713152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.025 qpair failed and we were unable to recover it. 
00:32:47.028 [2024-11-20 14:51:58.736932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.736994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.737110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.737142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.737311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.737348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.737469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.737501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.737676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.737708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 
00:32:47.028 [2024-11-20 14:51:58.737806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.737837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.738017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.738049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.738263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.738294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.738497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.738529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.738786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.738816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 
00:32:47.028 [2024-11-20 14:51:58.738987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.739019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.739151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.739182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.739371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.739403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.739601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.739634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.739813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.739844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 
00:32:47.028 [2024-11-20 14:51:58.739973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.740005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.740194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.740227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.740401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.740431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.740619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.740651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.740788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.740820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 
00:32:47.028 [2024-11-20 14:51:58.740994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.741026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.741209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.028 [2024-11-20 14:51:58.741240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.028 qpair failed and we were unable to recover it. 00:32:47.028 [2024-11-20 14:51:58.741415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.741446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.741615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.741646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.741882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.741913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.742131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.742164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.742270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.742301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.742416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.742447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.742560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.742593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.742697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.742728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.742907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.742940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.743065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.743096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.743367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.743400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.743573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.743605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.743868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.743900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.744148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.744182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.744373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.744405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.744606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.744637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.744896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.744927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.745185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.745218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.745400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.745432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.745632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.745663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.745857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.745889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.746154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.746198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.746316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.746349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.746543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.746575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.746751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.746783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.746912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.746944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.747053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.747085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.747288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.747320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.747436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.747468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.747641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.747673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.747845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.747878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.748139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.748173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.029 [2024-11-20 14:51:58.748364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.748397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 
00:32:47.029 [2024-11-20 14:51:58.748579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.029 [2024-11-20 14:51:58.748612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.029 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.748793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.748826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.748994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.749029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.749219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.749251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.749431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.749463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 
00:32:47.030 [2024-11-20 14:51:58.749583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.749615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.749868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.749899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.750018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.750051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.750169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.750201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.750317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.750349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 
00:32:47.030 [2024-11-20 14:51:58.750533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.750564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.750755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.750787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.750974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.751006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.751244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.751276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.751555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.751587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 
00:32:47.030 [2024-11-20 14:51:58.751766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.751798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.752042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.752076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.752251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.752284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.752410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.752441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 00:32:47.030 [2024-11-20 14:51:58.752634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.030 [2024-11-20 14:51:58.752666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.030 qpair failed and we were unable to recover it. 
00:32:47.030 [2024-11-20 14:51:58.752855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.752887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.753076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.753109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.753290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.753322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.753487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.753519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.753686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.753719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.753851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.753884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.754093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.754126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.754299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.754332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.754520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.754551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.754726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.754759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.754874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.754907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.755179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.755212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.755405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.755437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.755555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.755588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.755711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.755743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.755966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.755998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.756100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.756130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.756320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.756353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.756601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.030 [2024-11-20 14:51:58.756632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.030 qpair failed and we were unable to recover it.
00:32:47.030 [2024-11-20 14:51:58.756813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.756845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.757130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.757163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.757372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.757404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.757576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.757607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.757719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.757751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.757878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.757910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.758100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.758134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.758328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.758360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.758561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.758594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.758713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.758744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.758914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.758946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.759148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.759181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.759307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.759339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.759450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.759482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.759693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.759725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.759921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.759961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.760137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.760168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.760371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.760409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.760510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.760540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.760720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.760751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.760877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.760908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.761099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.761132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.761301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.761332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.761536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.761568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.761686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.761718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.761926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.761970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.762112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.762144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.762260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.762291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.762410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.762441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.762652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.762684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.762806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.762837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.763025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.763057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.763182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.763215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.763408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.763440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.763612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.763645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.763885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.763915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.764115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.764147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.764245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.764275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.031 qpair failed and we were unable to recover it.
00:32:47.031 [2024-11-20 14:51:58.764390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.031 [2024-11-20 14:51:58.764421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.764686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.764716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.764898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.764930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.765161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.765192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.765305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.765337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.765504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.765536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.765802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.765835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.766057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.766090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.766188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.766221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.766357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.766389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.766571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.766604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.766843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.766875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.766997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.767029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.767147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.767178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.767361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.767393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.767591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.767623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.767809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.767840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.768022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.768054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.768162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.768194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.768399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.768430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.768627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.768664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.768864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.768896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.769162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.769194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.769405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.769438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.769610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.769642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.769758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.769790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.769975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.770007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.770191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.770223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.770409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.770440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.770704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.770736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.770928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.770971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.771104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.771135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.771373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.771406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.771541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.771572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.771698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.771731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.771847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.771878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.772119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.772153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.772336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.772367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.772487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.032 [2024-11-20 14:51:58.772518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.032 qpair failed and we were unable to recover it.
00:32:47.032 [2024-11-20 14:51:58.772721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.772754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.772877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.772908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.773047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.773081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.773268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.773300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.773425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.773457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.773656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.773687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.773867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.773899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.774030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.774085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.774298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.774335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.774538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.774569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.774683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.774714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.774835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.774867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.774990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.775022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.775258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.775290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.775418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.775448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.775552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.775584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.775818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.775849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.775963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.775994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.776232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.033 [2024-11-20 14:51:58.776262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.033 qpair failed and we were unable to recover it.
00:32:47.033 [2024-11-20 14:51:58.776499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.776529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.776714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.776745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.776863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.776894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.777120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.777153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.777339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.777370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 
00:32:47.033 [2024-11-20 14:51:58.777541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.777571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.777742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.777773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.777890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.777921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.778132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.778163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.778280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.778311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 
00:32:47.033 [2024-11-20 14:51:58.778577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.778608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.778781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.778811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.779066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.779098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.779265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.779296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.779427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.779457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 
00:32:47.033 [2024-11-20 14:51:58.779713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.779744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.779931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.779970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.780157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.780188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.780364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.033 [2024-11-20 14:51:58.780394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.033 qpair failed and we were unable to recover it. 00:32:47.033 [2024-11-20 14:51:58.780568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.780599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.780868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.780898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.781047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.781079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.781322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.781354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.781545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.781576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.781798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.781828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.781967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.782000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.782187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.782218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.782323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.782354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.782532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.782562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.782758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.782789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.782908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.782944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.783148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.783180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.783380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.783411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.783515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.783545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.783720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.783752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.783918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.783957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.784236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.784267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.784447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.784478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.784574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.784605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.784773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.784804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.784942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.785003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.785208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.785239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.785447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.785476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.785615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.785647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.785828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.785859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.786032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.786064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.786328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.786360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.786539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.786569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.786764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.786795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.786976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.787009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 
00:32:47.034 [2024-11-20 14:51:58.787198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.787230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.787458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.787489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.787675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.034 [2024-11-20 14:51:58.787706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.034 qpair failed and we were unable to recover it. 00:32:47.034 [2024-11-20 14:51:58.787818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.787849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.787988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.788020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 
00:32:47.035 [2024-11-20 14:51:58.788199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.788230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.788535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.788566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.788676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.788717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.788901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.788932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.789194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.789226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 
00:32:47.035 [2024-11-20 14:51:58.789424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.789455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.789569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.789601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.789813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.789844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.790038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.790071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.790238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.790270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 
00:32:47.035 [2024-11-20 14:51:58.790384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.790414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.790623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.790654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.790783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.790815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.790957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.790990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.791180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.791212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 
00:32:47.035 [2024-11-20 14:51:58.791386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.791418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.791650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.791721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.791994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.792031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.792157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.792189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 00:32:47.035 [2024-11-20 14:51:58.792359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.035 [2024-11-20 14:51:58.792392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.035 qpair failed and we were unable to recover it. 
00:32:47.035 [2024-11-20 14:51:58.792636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.792667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.792786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.792818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.793059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.793092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.793266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.793298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.793468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.793499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.793618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.793649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.793856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.793889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.794088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.794121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.794242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.794272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.794406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.794447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.794710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.794741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.794921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.794963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.795140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.795171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.795361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.795392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.795533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.795568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.795743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.035 [2024-11-20 14:51:58.795774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.035 qpair failed and we were unable to recover it.
00:32:47.035 [2024-11-20 14:51:58.795982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.796015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.796290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.796322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.796588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.796619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.796827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.796859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.796988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.797021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.797218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.797249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.797387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.797418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.797638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.797668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.797869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.797899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.798030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.798062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.798265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.798296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.798468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.798500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.798617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.798647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.798766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.798797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.799004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.799037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.799273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.799304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.799509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.799540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.799727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.799757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.799958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.799991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.800103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.800134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.800272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.800303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.800480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.800510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.800691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.800721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.800903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.800935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.801213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.801246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.801373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.801405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.801537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.801567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.801686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.801717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.801843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.801873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.802010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.802041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.802279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.802311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.802549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.802579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.802813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.802843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.802944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.802990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.803244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.803274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.803453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.803483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.803729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.803760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.803991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.036 [2024-11-20 14:51:58.804021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.036 qpair failed and we were unable to recover it.
00:32:47.036 [2024-11-20 14:51:58.804143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.804172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.804338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.804369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.804556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.804587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.804798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.804829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.805006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.805037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.805227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.805258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.805447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.805478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.805658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.805688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.805853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.805885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.806080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.806113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.806287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.806317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.806576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.806606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.806773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.806803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.807040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.807073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.807318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.807349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.807543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.807574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.807764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.807796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.808057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.808088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.808343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.808375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.808587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.808618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.808891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.808922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.809103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.809134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.809324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.809356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.809544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.809575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.809705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.809735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.809996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.810028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.810209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.810239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.810498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.810529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.810793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.810825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.811076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.811108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.811235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.811265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.811444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.811474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.811646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.811678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.811850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.811881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.812056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.812086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.812320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.812359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.037 [2024-11-20 14:51:58.812493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.037 [2024-11-20 14:51:58.812523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.037 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.812753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.812785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.812894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.812926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.813216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.813248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.813374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.813405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.813575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.813605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.813742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.813772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.813943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.813985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.814224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.814255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.814461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.814492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.814661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.814692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.814880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.814911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.815183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.815214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.815486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.815518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.815693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.815724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.816002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.816035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.816218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.816249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.816373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.816404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.816599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.816630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.816821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.816852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.817092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.817125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.817313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.817346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.817528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.817559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.817727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.817758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.818016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.038 [2024-11-20 14:51:58.818049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.038 qpair failed and we were unable to recover it.
00:32:47.038 [2024-11-20 14:51:58.818244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.818275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.038 [2024-11-20 14:51:58.818563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.818636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.038 [2024-11-20 14:51:58.818844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.818881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.038 [2024-11-20 14:51:58.819153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.819188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.038 [2024-11-20 14:51:58.819429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.819461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 
00:32:47.038 [2024-11-20 14:51:58.819653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.819685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.038 [2024-11-20 14:51:58.819790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.819821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.038 [2024-11-20 14:51:58.820085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.038 [2024-11-20 14:51:58.820117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.038 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.820245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.820276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.820402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.820433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.820616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.820646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.820821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.820853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.820969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.821002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.821254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.821285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.821405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.821436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.821641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.821673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.821857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.821888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.822099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.822131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.822367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.822398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.822528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.822561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.822674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.822705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.822874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.822905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.823102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.823134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.823306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.823336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.823522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.823554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.823743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.823775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.823965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.823998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.824243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.824275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.824403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.824441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.824635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.824666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.824860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.824891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.825008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.825041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.825151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.825184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.825374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.825405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.825575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.825607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.825898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.825928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.826053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.826085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.826205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.826236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.826469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.826499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.826735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.826766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.826880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.826911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.827038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.827071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.827314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.827346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.827550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.827582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.827764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.827795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 
00:32:47.039 [2024-11-20 14:51:58.828031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.039 [2024-11-20 14:51:58.828063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.039 qpair failed and we were unable to recover it. 00:32:47.039 [2024-11-20 14:51:58.828325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.828357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.828529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.828559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.828749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.828781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.829034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.829067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.040 [2024-11-20 14:51:58.829291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.829323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.829450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.829480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.829688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.829719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.829900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.829930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.830124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.830156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.040 [2024-11-20 14:51:58.830347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.830380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.830592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.830625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.830810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.830842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.831019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.831051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.831240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.831273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.040 [2024-11-20 14:51:58.831463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.831493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.831595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.831627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.831761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.831794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.831974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.832005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.832184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.832216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.040 [2024-11-20 14:51:58.832330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.832360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.832551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.832582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.832822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.832854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.833108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.833142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.833333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.833367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.040 [2024-11-20 14:51:58.833487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.833518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.833786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.833818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.833995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.834028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.834142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.834172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.834304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.834335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.040 [2024-11-20 14:51:58.834511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.834541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.834726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.834757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.834875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.834907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.835097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.835128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 00:32:47.040 [2024-11-20 14:51:58.835337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.040 [2024-11-20 14:51:58.835368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.040 qpair failed and we were unable to recover it. 
00:32:47.043 [2024-11-20 14:51:58.854218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.043 [2024-11-20 14:51:58.854282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:47.043 qpair failed and we were unable to recover it.
00:32:47.043 [2024-11-20 14:51:58.854470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.043 [2024-11-20 14:51:58.854542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.043 qpair failed and we were unable to recover it.
00:32:47.043 [... the same connect() failure (errno = 111) and sock connection error for tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 repeated on every retry through 14:51:58.858731; only timestamps differ ...]
00:32:47.043 [2024-11-20 14:51:58.858920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.043 [2024-11-20 14:51:58.858960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.043 qpair failed and we were unable to recover it. 00:32:47.043 [2024-11-20 14:51:58.859142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.043 [2024-11-20 14:51:58.859176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.043 qpair failed and we were unable to recover it. 00:32:47.043 [2024-11-20 14:51:58.859367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.043 [2024-11-20 14:51:58.859398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.043 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.859605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.859636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.859751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.859781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.859963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.859996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.860111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.860142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.860405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.860437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.860623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.860654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.860822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.860854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.860994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.861026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.861329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.861360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.861470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.861500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.861711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.861743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.861865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.861895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.862115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.862151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.862340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.862372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.862474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.862506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.862625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.862657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.862852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.862882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.863064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.863098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.863231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.863262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.863442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.863474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.863666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.863697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.863945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.863984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.864165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.864196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.864372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.864403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.864581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.864614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.864800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.864843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.865099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.865132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.865324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.865355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.865609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.865640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.865752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.865783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.865906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.865937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.866184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.866215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.866387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.866419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.866610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.866641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.866769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.866801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.866992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.867025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.044 [2024-11-20 14:51:58.867202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.867233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 
00:32:47.044 [2024-11-20 14:51:58.867421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.044 [2024-11-20 14:51:58.867452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.044 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.867626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.867657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.867784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.867816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.868000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.868032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.868161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.868193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.868378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.868409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.868612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.868644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.868833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.868865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.869112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.869144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.869316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.869347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.869461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.869493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.869758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.869789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.869905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.869935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.870193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.870225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.870337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.870367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.870540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.870572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.870765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.870797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.870903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.870934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.871135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.871166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.871403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.871434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.871634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.871665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.871768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.871798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.871990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.872023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.872137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.872169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.872286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.872317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.872455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.872487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.872612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.872642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.872839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.872871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.873055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.873087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.873269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.873306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.873579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.873611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.873799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.873830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.874031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.874064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.874270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.874302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 00:32:47.045 [2024-11-20 14:51:58.874563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.045 [2024-11-20 14:51:58.874594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.045 qpair failed and we were unable to recover it. 
00:32:47.045 [2024-11-20 14:51:58.874713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.045 [2024-11-20 14:51:58.874745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.045 qpair failed and we were unable to recover it.
[Identical failure triples (connect() errno = 111, sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeat continuously from 14:51:58.874861 through 14:51:58.899378; repeats elided.]
00:32:47.049 [2024-11-20 14:51:58.899556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.899588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.899758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.899790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.900035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.900067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.900303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.900334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.900513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.900545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.900731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.900762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.900940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.900980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.901170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.901202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.901302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.901334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.901538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.901569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.901751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.901783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.901993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.902025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.902228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.902259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.902447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.902478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.902672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.902704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.902918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.902971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.903199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.903230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.903337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.903369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.903635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.903666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.903838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.903869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.903997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.904029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.904293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.904324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.904434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.904465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.904668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.904700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.904938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.904979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.905260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.905292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.905564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.905595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.905767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.905799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.906036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.906070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.906311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.906343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.906546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.906577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.906764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.906795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.906969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.907001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.907120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.907152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.907391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.907423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 
00:32:47.049 [2024-11-20 14:51:58.907628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.907659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.907846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.907876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.049 [2024-11-20 14:51:58.907999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.049 [2024-11-20 14:51:58.908031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.049 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.908225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.908256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.908425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.908456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 
00:32:47.050 [2024-11-20 14:51:58.908695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.908726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.908967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.908998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.909180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.909222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.909463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.909494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.909666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.909697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 
00:32:47.050 [2024-11-20 14:51:58.909936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.909981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.910085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.910116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.910287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.910319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.910588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.910619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.910820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.910851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 
00:32:47.050 [2024-11-20 14:51:58.911119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.911152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.911411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.911442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.911644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.911675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.911858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.911890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 00:32:47.050 [2024-11-20 14:51:58.912113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.050 [2024-11-20 14:51:58.912145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.050 qpair failed and we were unable to recover it. 
00:32:47.050 [2024-11-20 14:51:58.912274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.050 [2024-11-20 14:51:58.912305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.050 qpair failed and we were unable to recover it.
00:32:47.050 [2024-11-20 14:51:58.912543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.050 [2024-11-20 14:51:58.912613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.050 qpair failed and we were unable to recover it.
00:32:47.050 [2024-11-20 14:51:58.912760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.050 [2024-11-20 14:51:58.912795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.050 qpair failed and we were unable to recover it.
00:32:47.050 [2024-11-20 14:51:58.912982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.050 [2024-11-20 14:51:58.913015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.050 qpair failed and we were unable to recover it.
00:32:47.050 [2024-11-20 14:51:58.913145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.050 [2024-11-20 14:51:58.913175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.050 qpair failed and we were unable to recover it.
00:32:47.050 [2024-11-20 14:51:58.913468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.050 [2024-11-20 14:51:58.913498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.050 qpair failed and we were unable to recover it.
[the same three-line connect()/qpair-failure record repeats for tqpair=0x7fac24000b90, with only the timestamps advancing, through 2024-11-20 14:51:58.920936]
00:32:47.051 [2024-11-20 14:51:58.921185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.921216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.921461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.921533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.921749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.921783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.921994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.922030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.922225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.922257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 
00:32:47.051 [2024-11-20 14:51:58.922511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.922544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.922651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.922683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.922807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.922838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.922967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.923000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.923262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.923295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 
00:32:47.051 [2024-11-20 14:51:58.923507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.923540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.923778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.923810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.923994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.924028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.924323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.924355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.924490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.924531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 
00:32:47.051 [2024-11-20 14:51:58.924721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.924752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.051 [2024-11-20 14:51:58.924882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.051 [2024-11-20 14:51:58.924914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.051 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.925190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.925223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.925507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.925538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.925731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.925763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 
00:32:47.052 [2024-11-20 14:51:58.925876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.925908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.926093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.926124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.926374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.926406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.926586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.926618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.926794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.926826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 
00:32:47.052 [2024-11-20 14:51:58.926998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.927031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.927166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.927197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.927455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.927486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.927667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.927699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.927999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 
00:32:47.052 [2024-11-20 14:51:58.928186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.928218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.928340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.928371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.928541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.928574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.928694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.928725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.929005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.929038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 
00:32:47.052 [2024-11-20 14:51:58.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.929259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.929452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.929483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.929620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.929652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.929795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.929827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.929943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.929986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 
00:32:47.052 [2024-11-20 14:51:58.930170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.930202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.930486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.930519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.930650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.930682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.930856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.930886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.931008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.931041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 
00:32:47.052 [2024-11-20 14:51:58.931312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.931343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.931476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.931507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.931695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.931727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.931859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.052 [2024-11-20 14:51:58.931890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.052 qpair failed and we were unable to recover it. 00:32:47.052 [2024-11-20 14:51:58.932104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.932137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 
00:32:47.053 [2024-11-20 14:51:58.932326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.932358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.932559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.932590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.932803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.932835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.933095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.933128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.933300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.933331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 
00:32:47.053 [2024-11-20 14:51:58.933518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.933550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.933674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.933705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.933992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.934026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.934135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.934166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.934347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.934378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 
00:32:47.053 [2024-11-20 14:51:58.934560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.934591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.934780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.934811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.934995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.935028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.935213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.935245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.935384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.935416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 
00:32:47.053 [2024-11-20 14:51:58.935585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.935617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.935819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.935851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.936125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.936158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.936338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.936370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.936557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.936588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 
00:32:47.053 [2024-11-20 14:51:58.936825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.936857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.936971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.937002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.937261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.937293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.937574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.937606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 00:32:47.053 [2024-11-20 14:51:58.937790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.053 [2024-11-20 14:51:58.937822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.053 qpair failed and we were unable to recover it. 
00:32:47.053 [2024-11-20 14:51:58.937992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.053 [2024-11-20 14:51:58.938023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.053 qpair failed and we were unable to recover it.
[... the same three-message error sequence repeats for every reconnect attempt through 2024-11-20 14:51:58.964255, each failing with errno = 111 for tqpair=0x7fac30000b90 at addr=10.0.0.2, port=4420 ...]
00:32:47.346 [2024-11-20 14:51:58.964510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.346 [2024-11-20 14:51:58.964542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.346 qpair failed and we were unable to recover it. 00:32:47.346 [2024-11-20 14:51:58.964729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.346 [2024-11-20 14:51:58.964761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.346 qpair failed and we were unable to recover it. 00:32:47.346 [2024-11-20 14:51:58.964992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.346 [2024-11-20 14:51:58.965026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.346 qpair failed and we were unable to recover it. 00:32:47.346 [2024-11-20 14:51:58.965275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.346 [2024-11-20 14:51:58.965307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.965521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.965554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 
00:32:47.347 [2024-11-20 14:51:58.965736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.965768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.965888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.965920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.966260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.966321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.966595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.966665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.966905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.966939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 
00:32:47.347 [2024-11-20 14:51:58.967190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.967222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.967425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.967466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.967574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.967605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.967788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.967820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 00:32:47.347 [2024-11-20 14:51:58.967970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.347 [2024-11-20 14:51:58.968004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.347 qpair failed and we were unable to recover it. 
00:32:47.348 [2024-11-20 14:51:58.968196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.348 [2024-11-20 14:51:58.968226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.348 qpair failed and we were unable to recover it. 00:32:47.348 [2024-11-20 14:51:58.968356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.348 [2024-11-20 14:51:58.968388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.348 qpair failed and we were unable to recover it. 00:32:47.348 [2024-11-20 14:51:58.968562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.348 [2024-11-20 14:51:58.968593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.348 qpair failed and we were unable to recover it. 00:32:47.348 [2024-11-20 14:51:58.968782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.348 [2024-11-20 14:51:58.968814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.348 qpair failed and we were unable to recover it. 00:32:47.348 [2024-11-20 14:51:58.969093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.348 [2024-11-20 14:51:58.969129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.348 qpair failed and we were unable to recover it. 
00:32:47.352 [2024-11-20 14:51:58.980315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.980348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.980467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.980498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.980677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.980709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.980888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.980919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.981171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.981226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 
00:32:47.353 [2024-11-20 14:51:58.981563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.981632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.981937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.981986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.982180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.982212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.982337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.982383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 00:32:47.353 [2024-11-20 14:51:58.982592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.353 [2024-11-20 14:51:58.982623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.353 qpair failed and we were unable to recover it. 
00:32:47.365 [2024-11-20 14:51:59.007121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.365 [2024-11-20 14:51:59.007152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.365 qpair failed and we were unable to recover it. 00:32:47.365 [2024-11-20 14:51:59.007333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.007365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.007547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.007578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.007844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.007876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.008051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.008084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 
00:32:47.366 [2024-11-20 14:51:59.008268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.008298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.008415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.008448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.008642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.008673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.008858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.008896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.009098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.009131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 
00:32:47.366 [2024-11-20 14:51:59.009313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.009345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.009556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.009586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.009795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.009826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.009956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.009988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.010178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.010210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 
00:32:47.366 [2024-11-20 14:51:59.010396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.010427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.366 qpair failed and we were unable to recover it. 00:32:47.366 [2024-11-20 14:51:59.010637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.366 [2024-11-20 14:51:59.010668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.010917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.010957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.011164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.011194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.011454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.011485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 
00:32:47.367 [2024-11-20 14:51:59.011666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.011697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.011893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.011924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.012116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.012148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.012324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.012354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.012593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.012625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 
00:32:47.367 [2024-11-20 14:51:59.012808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.012839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.012980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.013013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.013271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.013302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.013488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.013519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.013730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.013761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 
00:32:47.367 [2024-11-20 14:51:59.013985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.014017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.014130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.014163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.367 [2024-11-20 14:51:59.014282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.367 [2024-11-20 14:51:59.014314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.367 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.014488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.014518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.014726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.014756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 
00:32:47.368 [2024-11-20 14:51:59.014884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.014916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.015127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.015160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.015423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.015454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.015596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.015626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.015857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.015888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 
00:32:47.368 [2024-11-20 14:51:59.016038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.016071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.016242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.016273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.016456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.016487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.016623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.016655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.016854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.016886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 
00:32:47.368 [2024-11-20 14:51:59.017070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.017102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.017289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.017321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.017523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.017554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.017725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.017762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.017977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.018010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 
00:32:47.368 [2024-11-20 14:51:59.018120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.018152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.018321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.018352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.018559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.018590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.018769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.018800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.019006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.019038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 
00:32:47.368 [2024-11-20 14:51:59.019236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.019267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.019372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.019404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.019593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.019624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.019863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.368 [2024-11-20 14:51:59.019894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.368 qpair failed and we were unable to recover it. 00:32:47.368 [2024-11-20 14:51:59.020018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.020050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 
00:32:47.369 [2024-11-20 14:51:59.020317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.020348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.020535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.020567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.020763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.020796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.020922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.020960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.021162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.021194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 
00:32:47.369 [2024-11-20 14:51:59.021376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.021407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.021586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.021617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.021747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.021777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.021959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.021992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.022184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.022214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 
00:32:47.369 [2024-11-20 14:51:59.022338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.022369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.022560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.022591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.022851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.022882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.023016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.023048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 00:32:47.369 [2024-11-20 14:51:59.023241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.023273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 
00:32:47.369 [2024-11-20 14:51:59.023490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.369 [2024-11-20 14:51:59.023522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.369 qpair failed and we were unable to recover it. 
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats 114 more times in this span (timestamps 2024-11-20 14:51:59.023636 through 14:51:59.048832, console times 00:32:47.369-00:32:47.371), always for the same tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:47.371 [2024-11-20 14:51:59.049004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.049037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.049291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.049322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.049507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.049539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.049721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.049752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.050034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.050066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.050371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.050402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.050586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.050618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.050883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.050914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.051055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.051087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.051209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.051241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.051412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.051443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.051568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.051599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.051786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.051818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.052071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.052104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.052282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.052313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.052501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.052534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.052702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.052732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.052992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.053027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.053145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.053183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.053375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.053408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.053589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.053620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.053854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.053886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.054127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.054159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.054424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.054455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.054653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.054684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.054876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.054907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.055107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.055139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.055400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.055432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.055564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.055595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.055716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.055747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.055938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.055978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.056102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.056133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.056316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.056347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.056477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.056509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.056678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.056710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.056946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.056987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.057261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.057292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.057472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.057504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.057687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.057718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.057896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.057926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.058125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.058157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.058399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.058430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.058608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.058639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.058922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.058964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.059245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.059275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.059637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.059709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.059967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.060006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.060254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.060288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.060553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.060585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.060773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.060805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.061005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.061039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.061252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.061284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.061454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.061487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.061743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.061775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.062028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.062061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.062269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.062303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.062566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.062599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.062734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.062767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.062961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.063004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.063271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.063305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 
00:32:47.371 [2024-11-20 14:51:59.063573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.063606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.371 [2024-11-20 14:51:59.063730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.371 [2024-11-20 14:51:59.063762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.371 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.063899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.063931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.064126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.064158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.064419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.064451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 
00:32:47.372 [2024-11-20 14:51:59.064629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.064661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.064761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.064793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.064978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.065012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.065192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.065225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.065490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.065524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 
00:32:47.372 [2024-11-20 14:51:59.065635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.065666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.065856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.065889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.066089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.066123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.066332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.066364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 00:32:47.372 [2024-11-20 14:51:59.066554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.066587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 
00:32:47.372 [2024-11-20 14:51:59.066724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.066757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 
00:32:47.372 [2024-11-20 14:51:59.077576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.372 [2024-11-20 14:51:59.077649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.372 qpair failed and we were unable to recover it. 
00:32:47.373 [2024-11-20 14:51:59.092215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.092246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.092448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.092480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.092755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.092787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.092970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.093005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.093195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.093227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 
00:32:47.373 [2024-11-20 14:51:59.093410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.093440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.093558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.093590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.093848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.093879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.094000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.094031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.094296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.094327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 
00:32:47.373 [2024-11-20 14:51:59.094563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.373 [2024-11-20 14:51:59.094595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.373 qpair failed and we were unable to recover it. 00:32:47.373 [2024-11-20 14:51:59.094728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.094759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.094946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.094986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.095170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.095201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.095386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.095416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.095540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.095572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.095693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.095724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.095971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.096004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.096210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.096241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.096479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.096511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.096636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.096668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.096850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.096881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.097092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.097126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.097317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.097349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.097468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.097499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.097668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.097700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.097988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.098021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.098192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.098223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.098410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.098441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.098674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.098705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.098911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.098956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.099142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.099174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.099408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.099439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.099703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.099734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.099939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.099979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.100112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.100144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.100391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.100421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.100602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.100634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.100806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.100837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.101073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.101106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.101348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.101380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.101519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.101551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.101719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.101750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.101922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.101962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.102074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.102107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.102213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.102244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.102451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.102483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.102612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.102644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.102826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.102857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.103062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.103093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.103267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.103298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.103483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.103513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.103820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.103852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.104041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.104073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.104333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.104364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.104571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.104602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.104715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.104747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.104923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.104965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.105145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.105176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.105347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.105379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.105642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.105673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.105859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.105889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.106016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.106049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.106260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.106290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.106465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.106496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.106612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.106643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.106833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.106864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.107048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.107080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.107283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.107314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.107513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.107545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.107809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.107841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.108033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.108071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.108187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.108219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.108389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.108421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.108682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.108713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 
00:32:47.374 [2024-11-20 14:51:59.108979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.109010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.109210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.109240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.109377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.374 [2024-11-20 14:51:59.109407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.374 qpair failed and we were unable to recover it. 00:32:47.374 [2024-11-20 14:51:59.109538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.375 [2024-11-20 14:51:59.109568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.375 qpair failed and we were unable to recover it. 00:32:47.375 [2024-11-20 14:51:59.109752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.375 [2024-11-20 14:51:59.109783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.375 qpair failed and we were unable to recover it. 
00:32:47.376 [2024-11-20 14:51:59.134064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.134096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.134267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.134298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.134484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.134514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.134634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.134665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.134855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.134886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 
00:32:47.376 [2024-11-20 14:51:59.135072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.135103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.135205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.135236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.135367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.135399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.135570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.135600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.135786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.135818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 
00:32:47.376 [2024-11-20 14:51:59.136115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.136148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.136256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.136287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.136464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.136494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.136605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.136637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.136820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.136850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 
00:32:47.376 [2024-11-20 14:51:59.137035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.137067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.137304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.137335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.137462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.137499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.137682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.137712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.137956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.137989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 
00:32:47.376 [2024-11-20 14:51:59.138182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.138212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.138396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.138427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.138598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.138630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.138733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.138763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.138963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.138994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 
00:32:47.376 [2024-11-20 14:51:59.139231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.139262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.139376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.139407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.139589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.376 [2024-11-20 14:51:59.139619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.376 qpair failed and we were unable to recover it. 00:32:47.376 [2024-11-20 14:51:59.139829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.139859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.139983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.140015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.140252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.140282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.140468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.140500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.140631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.140663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.140841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.140871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.140986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.141018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.141260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.141291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.141552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.141584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.141763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.141794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.141966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.141998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.142180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.142211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.142415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.142447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.142631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.142662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.142846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.142877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.143064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.143096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.143276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.143307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.143530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.143561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.143734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.143764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.143991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.144023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.147091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.147126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.147407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.147439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.147612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.147642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.147876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.147906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.148174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.148206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.148443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.148474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.148598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.148630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.148732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.148762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.148943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.148985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.149188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.149218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.149412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.149443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.149627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.149658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.149771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.149802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.150022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.150056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.150316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.150348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.150464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.150495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.150731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.150761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.150934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.150975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.151212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.151242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.151427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.151458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.151645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.151676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.151857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.151888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.152133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.152164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.152290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.152321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.152581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.152614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.152746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.152776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.152913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.152944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.377 [2024-11-20 14:51:59.153134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.153165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.153401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.153432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.153698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.153729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.153983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.154015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 00:32:47.377 [2024-11-20 14:51:59.154183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.377 [2024-11-20 14:51:59.154213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.377 qpair failed and we were unable to recover it. 
00:32:47.378 [2024-11-20 14:51:59.164338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.378 [2024-11-20 14:51:59.164368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.378 qpair failed and we were unable to recover it.
00:32:47.378 [2024-11-20 14:51:59.164535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.378 [2024-11-20 14:51:59.164606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.378 qpair failed and we were unable to recover it.
00:32:47.378 [2024-11-20 14:51:59.164810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.378 [2024-11-20 14:51:59.164846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.378 qpair failed and we were unable to recover it.
00:32:47.378 [2024-11-20 14:51:59.165058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.378 [2024-11-20 14:51:59.165094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.378 qpair failed and we were unable to recover it.
00:32:47.378 [2024-11-20 14:51:59.165225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.378 [2024-11-20 14:51:59.165258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.378 qpair failed and we were unable to recover it.
00:32:47.379 [2024-11-20 14:51:59.177686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.177718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.177832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.177865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.177995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.178028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.178212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.178243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.178418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.178450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.178571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.178603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.178713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.178744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.178857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.178890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.179100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.179133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.179313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.179345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.179469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.179501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.179714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.179746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.179931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.179973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.180149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.180181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.180357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.180389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.180568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.180599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.180783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.180826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.181021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.181054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.181160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.181191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.181436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.181469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.181654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.181687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.181936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.181980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.182093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.182126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.182231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.182263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.182379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.182411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.182532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.182564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.182803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.182835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.182987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.183020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.183160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.183192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.183472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.183504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.183704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.183736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.183863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.183895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.184039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.184073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.184283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.184315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.184414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.184443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.184564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.184595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.184784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.184816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.185092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.185126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.185256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.185288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.185495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.185526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.185638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.185670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.185886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.185918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.186065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.186098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.186300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.186331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.186454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.186488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 
00:32:47.379 [2024-11-20 14:51:59.186753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.186785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.186911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.186943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.187067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.187100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.379 qpair failed and we were unable to recover it. 00:32:47.379 [2024-11-20 14:51:59.187294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.379 [2024-11-20 14:51:59.187325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.187456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.187488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.187614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.187646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.187839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.187871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.188066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.188099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.188342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.188373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.188577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.188610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.188755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.188788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.188897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.188934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.189127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.189161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.189281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.189313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.189499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.189531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.189643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.189675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.189847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.189879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.190119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.190152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.190275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.190308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.190493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.190525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.190735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.190767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.190895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.190927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.191155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.191188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.191306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.191338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.191544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.191577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.191760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.191793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.191921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.191965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.192087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.192119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.192420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.192454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.192583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.192616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.192820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.192852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.193063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.193097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.193328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.193361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.193543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.193575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.193780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.193813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.193989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.194022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.194139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.194171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.194360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.194392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.194650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.194682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.194864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.194896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.195076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.195108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.195282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.195314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.195498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.195531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.195654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.195686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.195793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.195826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.195966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.195998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.196215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.196247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.196449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.196484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.196622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.196651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.196773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.196803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.196912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.196943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.197075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.197111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.197231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.197264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.197457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.197489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.197606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.197639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.197828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.197863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.198133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.198166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.198269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.198301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.198484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.198516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.198722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.198756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.198980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.199013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.199306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.199340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.199621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.199653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.199853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.199886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.200076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.200112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.200307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.200340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.200514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.200546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.200723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.200758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.200926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.200966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.201095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.201128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.201312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.201347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.201462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.201495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.201664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.201696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.201880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.201912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 00:32:47.380 [2024-11-20 14:51:59.202050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.380 [2024-11-20 14:51:59.202083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.380 qpair failed and we were unable to recover it. 
00:32:47.380 [2024-11-20 14:51:59.202217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.202249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.202373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.202407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.202650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.202683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.202806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.202839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.203083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.203118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.203238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.203271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.203406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.203438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.203641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.203675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.203800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.203833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.204032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.204066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.204324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.204359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.204493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.204525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.206019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.206072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.206277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.206309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.206573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.206605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.206874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.206906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.207113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.207154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.207282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.207313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.207437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.207468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.207643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.207675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.207927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.207973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.208160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.208193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.208376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.208408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.208604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.208635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.208769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.208802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.209009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.209044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.209233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.209267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.209383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.209415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.209694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.209727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.209857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.209890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.210101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.210135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.210319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.210352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.210570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.210603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.210718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.210749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.210855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.210887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.211114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.211147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.211275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.211307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.211510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.211542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.211733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.211765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.211900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.211933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.212123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.212156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.212428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.212461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.212575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.212607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.212803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.212835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.212967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.213001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.213195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.213227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.213364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.213397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.213531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.213563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.213678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.213711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.213887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.213921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.214046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.214080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.214218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.214251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.214378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.214411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.214657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.214689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.214871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.214906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.215035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.215066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.215254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.215294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.215416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.215448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.215628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.215660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 00:32:47.381 [2024-11-20 14:51:59.215855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.381 [2024-11-20 14:51:59.215887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.381 qpair failed and we were unable to recover it. 
00:32:47.381 [2024-11-20 14:51:59.216000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.381 [2024-11-20 14:51:59.216030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.381 qpair failed and we were unable to recover it.
00:32:47.381 [2024-11-20 14:51:59.216205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.381 [2024-11-20 14:51:59.216238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 [2024-11-20 14:51:59.216415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.216447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.216713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.216745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.216934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.216975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.217217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.217250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.217448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.217480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.217686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.217717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.217985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.218018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.218198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.218231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.218359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.218391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.218491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.218521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.218632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.218665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.218858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.218890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.219149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.219184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.219305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.219337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.219533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.219566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.219760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.219792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.219908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.219939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.220122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.220157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.220371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.220405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.220573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.220606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.220723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.220755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.220945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.220986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.221113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.221145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.221261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.221293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.221473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.221505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.221618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.221649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.221839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.221872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.221993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.222025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.222156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.222189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.222358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.222390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.222502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.222533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.222774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.222807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.222997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.223030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.223148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.223180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.223316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.223354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.223453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.223485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.223713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.223744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.223886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.223917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.224095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.224166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.224312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.224349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.224528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.224560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.224764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.224796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.224925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.224975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.225160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.225193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.225373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.225403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.225582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.225614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.225747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.225779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.225965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.225997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.226247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.226281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.226464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.226497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.226605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.226637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.226755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.226787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.227029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.227065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.227261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.227293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.227408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.227440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.227621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.227653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.227832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.227863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.227987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.228020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.228130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.228162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.228272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.228305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.228476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.228508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.228669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.228740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.228954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.228990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.229107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.229140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.229311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.229345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.229513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.229545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.229728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.229762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.229938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.229983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.382 qpair failed and we were unable to recover it.
00:32:47.382 [2024-11-20 14:51:59.230092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.382 [2024-11-20 14:51:59.230125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.230229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.230260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.230436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.230468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.230585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.230617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.230737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.230769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.230876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.230908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.231057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.231098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.231301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.231334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.231523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.231555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.231730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.231763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.231861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.231892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.232091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.232124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.232333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.232365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.232502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.232533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.232675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.232709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.232883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.232917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.233103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.233135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.233261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.233294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.233405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.233436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.233626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.233658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.233773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.233805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.234002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.234037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.234164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.234196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.234342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.234374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.234564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.383 [2024-11-20 14:51:59.234597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.383 qpair failed and we were unable to recover it.
00:32:47.383 [2024-11-20 14:51:59.234709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.234740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.234840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.234872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.235011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.235045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.235167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.235201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.235414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.235445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.235562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.235594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.235703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.235736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.235870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.235902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.236151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.236197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.236394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.236428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.236567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.236600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.236708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.236740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.236858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.236890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.237019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.237053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.237244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.237277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.237409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.237444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.237635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.237667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.237782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.237812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.238022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.238056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.238193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.238223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.238465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.238496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.238685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.238717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.238837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.238868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.239046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.239202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.239338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.239485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.239626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.239772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.239931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.239975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.240096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.240127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.240228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.240259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.240430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.240460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.240588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.240619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.240722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.240752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.240885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.240922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.241039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.241070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.241198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.241229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.241343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.241374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.241549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.241581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 
00:32:47.383 [2024-11-20 14:51:59.241832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.241863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.241985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.242018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.242128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.242160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.383 [2024-11-20 14:51:59.242275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.383 [2024-11-20 14:51:59.242306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.383 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.242428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.242459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.242656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.242688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.242852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.242884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.243059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.243092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.243208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.243240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.243479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.243511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.243621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.243652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.243828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.243859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.244039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.244072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.244255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.244290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.244419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.244451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.244582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.244614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.244788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.244819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.244924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.244963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.245090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.245122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.245243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.245273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.245389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.245420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.245531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.245563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.245735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.245772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.245882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.245914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.246121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.246153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.246277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.246309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.246420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.246452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.246560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.246592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.246748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.246819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.247023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.247063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.247241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.247274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.247382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.247415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.247540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.247572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.247772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.247804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.247918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.247974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.248085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.248115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.248239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.248271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.248397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.248429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.248569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.248601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.248718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.248751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.248882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.248914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.249106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.249139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.249257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.249291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.249411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.249442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.249562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.249595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.249826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.249858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.249982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.250015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.250134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.250166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.250293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.250325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.250443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.250481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.250659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.250692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.250889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.250921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.251052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.251085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.251199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.251231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.251354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.251386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.251603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.251642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.251746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.251778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.251958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.251991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.252106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.252137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.252322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.252354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.252470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.252503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.252680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.252712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.252892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.252924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.253049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.253082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.253263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.253296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.384 [2024-11-20 14:51:59.253413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.253445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.253618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.253651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.253777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.253809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.253995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.254030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 00:32:47.384 [2024-11-20 14:51:59.254152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.384 [2024-11-20 14:51:59.254185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.384 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.254310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.254342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.254447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.254480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.254583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.254613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.254855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.254887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.255128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.255161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.255271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.255303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.255417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.255449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.255554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.255586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.255689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.255720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.255889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.255921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.256268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.256301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.256541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.256573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.256681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.256713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.256863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.256895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.257140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.257174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.257313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.257345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.257523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.257555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.257692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.257723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.257847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.257879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.258056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.258100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.258294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.258325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.258435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.258467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.258583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.258615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.258812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.258844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.259015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.259049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.259248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.259281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.259405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.259437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.259682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.259714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.259836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.259868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.260063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.260097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.260213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.260245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.260369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.260401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.260596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.260628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.260811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.260844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.260985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.261018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.261188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.261221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.261402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.261433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.261626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.261659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.261847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.261880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.262003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.262036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.262163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.262194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.262366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.262398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.262520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.262552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.262658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.262689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.262877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.262909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.263121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.263154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.263348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.263381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.263518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.263550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.263741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.263772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.263944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.263988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.264113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.264145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.264258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.264291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.264399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.264431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.264605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.264637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.264752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.264785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.264995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.265028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.265153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.265201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.265325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.265370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.265485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.265529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.265721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.265762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.265961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.265995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.266096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.266127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.266259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.266306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.266495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.266527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.266763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.266796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 
00:32:47.385 [2024-11-20 14:51:59.266912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.266945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.267208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.267241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.267357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.267388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.385 [2024-11-20 14:51:59.267587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.385 [2024-11-20 14:51:59.267623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.385 qpair failed and we were unable to recover it. 00:32:47.386 [2024-11-20 14:51:59.267758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.386 [2024-11-20 14:51:59.267795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.386 qpair failed and we were unable to recover it. 
00:32:47.386 [2024-11-20 14:51:59.267972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.386 [2024-11-20 14:51:59.268019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.386 qpair failed and we were unable to recover it. 00:32:47.386 [2024-11-20 14:51:59.268142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.386 [2024-11-20 14:51:59.268175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.386 qpair failed and we were unable to recover it. 00:32:47.386 [2024-11-20 14:51:59.268360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.386 [2024-11-20 14:51:59.268392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.386 qpair failed and we were unable to recover it. 00:32:47.386 [2024-11-20 14:51:59.268690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.386 [2024-11-20 14:51:59.268729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.386 qpair failed and we were unable to recover it. 00:32:47.386 [2024-11-20 14:51:59.268856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.386 [2024-11-20 14:51:59.268889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.386 qpair failed and we were unable to recover it. 
00:32:47.676 [2024-11-20 14:51:59.279838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.676 [2024-11-20 14:51:59.279873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:47.676 qpair failed and we were unable to recover it.
00:32:47.676 [2024-11-20 14:51:59.279938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5acaf0 (9): Bad file descriptor
00:32:47.676 [2024-11-20 14:51:59.280154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.676 [2024-11-20 14:51:59.280225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.676 qpair failed and we were unable to recover it.
00:32:47.676 [2024-11-20 14:51:59.280385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.676 [2024-11-20 14:51:59.280422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.676 qpair failed and we were unable to recover it.
00:32:47.676 [2024-11-20 14:51:59.280743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.676 [2024-11-20 14:51:59.280776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.676 qpair failed and we were unable to recover it.
00:32:47.676 [2024-11-20 14:51:59.280966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.676 [2024-11-20 14:51:59.281003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:47.676 qpair failed and we were unable to recover it.
00:32:47.677 [2024-11-20 14:51:59.290298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.677 [2024-11-20 14:51:59.290323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.677 qpair failed and we were unable to recover it. 00:32:47.677 [2024-11-20 14:51:59.290429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.290453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.290699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.290723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.290833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.290857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.290976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.291010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 
00:32:47.678 [2024-11-20 14:51:59.291187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.291219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.291338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.291370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.291477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.291515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.291688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.291719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.291920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.291963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 
00:32:47.678 [2024-11-20 14:51:59.292143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.292175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.292346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.292377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.292575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.292606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.292727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.292758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.292859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.292891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 
00:32:47.678 [2024-11-20 14:51:59.293136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.293170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.293354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.293386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.293506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.293539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.293780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.293812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.294032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.294066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 
00:32:47.678 [2024-11-20 14:51:59.294252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.294284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.678 qpair failed and we were unable to recover it. 00:32:47.678 [2024-11-20 14:51:59.294395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.678 [2024-11-20 14:51:59.294426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.294616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.294647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.294818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.294850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.295062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.295096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 
00:32:47.679 [2024-11-20 14:51:59.295269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.295301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.295507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.295539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.295662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.295693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.295862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.295893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.296018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 
00:32:47.679 [2024-11-20 14:51:59.296169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.296329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.296478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.296646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.296814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 
00:32:47.679 [2024-11-20 14:51:59.296964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.296997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.297182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.297214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.297402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.297434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.297548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.297579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.297692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.297724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 
00:32:47.679 [2024-11-20 14:51:59.297867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.297898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.298021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.298053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.298220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.298252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.298490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.298521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 00:32:47.679 [2024-11-20 14:51:59.298626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.679 [2024-11-20 14:51:59.298656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.679 qpair failed and we were unable to recover it. 
00:32:47.679 [2024-11-20 14:51:59.298840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.298872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.299080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.299113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.299263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.299299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.299398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.299430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.299565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.299596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 
00:32:47.680 [2024-11-20 14:51:59.299782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.299814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.300084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.300120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.300233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.300265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.300397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.300428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.300670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.300702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 
00:32:47.680 [2024-11-20 14:51:59.300821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.300851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.300971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.301105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.301262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.301414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 
00:32:47.680 [2024-11-20 14:51:59.301553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.301754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.301908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.301938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.302128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.302159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.302268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.302299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 
00:32:47.680 [2024-11-20 14:51:59.302413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.302445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.302618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.302648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.302756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.302787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.680 qpair failed and we were unable to recover it. 00:32:47.680 [2024-11-20 14:51:59.302975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.680 [2024-11-20 14:51:59.303009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 00:32:47.681 [2024-11-20 14:51:59.303222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.681 [2024-11-20 14:51:59.303252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 
00:32:47.681 [2024-11-20 14:51:59.303399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.681 [2024-11-20 14:51:59.303430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 00:32:47.681 [2024-11-20 14:51:59.303533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.681 [2024-11-20 14:51:59.303565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 00:32:47.681 [2024-11-20 14:51:59.303680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.681 [2024-11-20 14:51:59.303710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 00:32:47.681 [2024-11-20 14:51:59.303886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.681 [2024-11-20 14:51:59.303918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 00:32:47.681 [2024-11-20 14:51:59.304114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.681 [2024-11-20 14:51:59.304146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.681 qpair failed and we were unable to recover it. 
00:32:47.681 [2024-11-20 14:51:59.304264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.681 [2024-11-20 14:51:59.304294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:47.681 qpair failed and we were unable to recover it.
[... the same connect()/qpair-recovery error pair repeats continuously from 14:51:59.304264 through 14:51:59.326319 (errno = 111, tqpair=0x7fac28000b90, addr=10.0.0.2, port=4420) ...]
00:32:47.685 [2024-11-20 14:51:59.326498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.326529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.326767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.326798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.326906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.326937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.327063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.327095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.327272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.327343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 
00:32:47.685 [2024-11-20 14:51:59.327560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.327596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.327717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.327749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.327858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.327889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.328083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.328117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.328312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.328342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 
00:32:47.685 [2024-11-20 14:51:59.328593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.328624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.328751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.328782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.328911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.328942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.329137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.329168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.329384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.329416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 
00:32:47.685 [2024-11-20 14:51:59.329517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.329549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.329731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.329762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.329884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.329914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.330129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.330162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.330266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.330297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 
00:32:47.685 [2024-11-20 14:51:59.330405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.330436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.330625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.330657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.330770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.330801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.330912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.330945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.331154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.331186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 
00:32:47.685 [2024-11-20 14:51:59.331304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.331335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.331459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.685 [2024-11-20 14:51:59.331490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.685 qpair failed and we were unable to recover it. 00:32:47.685 [2024-11-20 14:51:59.331670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.331702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.331913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.331944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.332130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.332162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.332283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.332314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.332419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.332454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.332619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.332650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.332926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.332969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.333076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.333107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.333230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.333261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.333376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.333406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.333575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.333606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.333800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.333832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.334028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.334060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.334174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.334206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.334326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.334357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.334542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.334573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.334746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.334778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.334966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.335000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.335188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.335219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.335332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.335362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.335462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.335494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.335667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.335699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.335874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.335905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.336093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.336125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.336260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.336291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.336485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.336516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.336707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.336741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.336916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.336957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.337069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.337101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.337227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.337258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.337397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.337428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.337607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.337644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.337765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.337797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.337913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.337945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.338142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.338174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.338304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.338334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.338522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.338553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 00:32:47.686 [2024-11-20 14:51:59.338661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.338690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.686 qpair failed and we were unable to recover it. 
00:32:47.686 [2024-11-20 14:51:59.338861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.686 [2024-11-20 14:51:59.338891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.339087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.339121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.339295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.339326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.339430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.339462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.339568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.339601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 
00:32:47.687 [2024-11-20 14:51:59.339714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.339745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.339861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.339893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.340147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.340225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.340412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.340484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 00:32:47.687 [2024-11-20 14:51:59.340623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.340659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 
00:32:47.687 [2024-11-20 14:51:59.340906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.687 [2024-11-20 14:51:59.340939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.687 qpair failed and we were unable to recover it. 
[the same connect() failure (errno = 111, ECONNREFUSED) repeats continuously from 14:51:59.341 through 14:51:59.364 for tqpair=0x7fac30000b90 and tqpair=0x59eba0, each time against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." — repeated identical log lines elided]
00:32:47.689 [2024-11-20 14:51:59.364146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.364178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.364285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.364315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.364425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.364457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.364574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.364605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.364755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.364787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 
00:32:47.689 [2024-11-20 14:51:59.364895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.364926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.365123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.365154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.365293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.365323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.365560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.365592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.365717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.365748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 
00:32:47.689 [2024-11-20 14:51:59.365929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.365970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.366091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.366123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.689 qpair failed and we were unable to recover it. 00:32:47.689 [2024-11-20 14:51:59.366362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.689 [2024-11-20 14:51:59.366394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.366566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.366596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.366795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.366827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.366943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.366984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.367106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.367136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.367257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.367294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.367410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.367443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.367669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.367701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.367808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.367840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.368016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.368049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.368239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.368271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.368384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.368416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.368531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.368563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.368758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.368789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.368973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.369005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.369140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.369173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.369368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.369398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.369542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.369573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.369816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.369847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.370043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.370078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.370346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.370378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.370556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.370587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.370712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.370743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.370914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.370945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.371192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.371224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.371462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.371493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.371623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.371656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.371833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.371863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.371999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.372031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.372142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.372174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.372298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.372329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.372457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.372488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.372731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.372762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.372887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.372920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.373060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.373091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.373201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.373232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.373342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.373375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.373570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.373601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 
00:32:47.690 [2024-11-20 14:51:59.373729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.373760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.373868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.373900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.374124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.690 [2024-11-20 14:51:59.374157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.690 qpair failed and we were unable to recover it. 00:32:47.690 [2024-11-20 14:51:59.374283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.374314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.374445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.374476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.374599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.374631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.374831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.374862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.375099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.375131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.375252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.375290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.375468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.375498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.375600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.375631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.375873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.375905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.376094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.376126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.376309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.376341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.376469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.376500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.376616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.376648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.376758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.376789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.376979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.377012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.377206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.377238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.377407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.377438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.377678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.377710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.377827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.377859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.377989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.378023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.378154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.378185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.378366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.378397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.378516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.378549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.378735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.378765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.378873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.378904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.379038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.379070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.379172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.379203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.379305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.379336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.379518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.379549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.379674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.379705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.379840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.379872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.379988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.380019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.380200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.380232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.380418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.380451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.380571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.380602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.380733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.380764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.380866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.380897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.381029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.381061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.381180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.381211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.381384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.381414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.381529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.381559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.381737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.381769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.381893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.381925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.382053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.382084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.382259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.382290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.382419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.382451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 00:32:47.691 [2024-11-20 14:51:59.382622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.382693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.691 qpair failed and we were unable to recover it. 
00:32:47.691 [2024-11-20 14:51:59.382983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.691 [2024-11-20 14:51:59.383021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.383212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.383244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.383419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.383450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.383586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.383616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.383862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.383893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.384096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.384129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.384301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.384331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.384609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.384640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.384844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.384875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.385044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.385076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.385280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.385311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.385433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.385465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.385646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.385686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.385809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.385840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.386024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.386057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.386193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.386224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.386325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.386356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.386547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.386578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.386685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.386715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.386930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.386975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.387083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.387114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.387293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.387324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.387508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.387539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.387753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.387784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.387972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.388006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.388142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.388174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.388303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.388333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.388513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.388544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.388719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.388749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.388864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.388895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.389081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.389113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.389242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.389273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.389471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.389501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.389695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.389726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.389840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.389871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.390108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.390140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.390252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.390282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.390453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.390485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.390663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.390693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.390813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.390845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 
00:32:47.692 [2024-11-20 14:51:59.390964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.390997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.391188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.391218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.391331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.391361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.692 [2024-11-20 14:51:59.391534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.692 [2024-11-20 14:51:59.391564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.692 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.391676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.391706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.693 [2024-11-20 14:51:59.391905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.391936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.392053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.392084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.392214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.392245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.392375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.392406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.392525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.392556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.693 [2024-11-20 14:51:59.392658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.392689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.392926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.392967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.393162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.393198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.393441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.393473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.393712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.393742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.693 [2024-11-20 14:51:59.393935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.393979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.394184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.394215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.394399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.394431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.394552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.394583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.394693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.394724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.693 [2024-11-20 14:51:59.394838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.394869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.394983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.395016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.395129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.395160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.395327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.395359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.395535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.395565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.693 [2024-11-20 14:51:59.395674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.395706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.395849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.395880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.396001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.396038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.396210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.396282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.396477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.396513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.693 [2024-11-20 14:51:59.396619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.396651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.396843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.396875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.397054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.397088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.397195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.397228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 00:32:47.693 [2024-11-20 14:51:59.397345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.693 [2024-11-20 14:51:59.397377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.693 qpair failed and we were unable to recover it. 
00:32:47.695 [2024-11-20 14:51:59.414156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.695 [2024-11-20 14:51:59.414189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.695 qpair failed and we were unable to recover it. 00:32:47.695 [2024-11-20 14:51:59.414410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.695 [2024-11-20 14:51:59.414481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.695 qpair failed and we were unable to recover it. 00:32:47.695 [2024-11-20 14:51:59.414616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.695 [2024-11-20 14:51:59.414651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.695 qpair failed and we were unable to recover it. 00:32:47.695 [2024-11-20 14:51:59.414892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.695 [2024-11-20 14:51:59.414923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.695 qpair failed and we were unable to recover it. 00:32:47.695 [2024-11-20 14:51:59.415051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.695 [2024-11-20 14:51:59.415083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.695 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.419235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.419265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.419387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.419417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.419643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.419673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.419871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.419903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.420107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.420140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.420268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.420298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.420539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.420569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.420769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.420801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.420923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.420965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.421204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.421235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.421420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.421451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.421557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.421588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.421759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.421790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.421912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.421943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.422159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.422191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.422380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.422411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.422603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.422635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.422881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.422912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.423162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.423195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.423291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.423321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.423412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.423443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.423720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.423751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.423939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.423983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.424177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.424208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.424342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.424373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.424560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.424591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.424771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.424802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.424913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.424943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.425172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.425204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.425376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.425413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 
00:32:47.696 [2024-11-20 14:51:59.425599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.425629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.425753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.425783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.425965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.425998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.426207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.696 [2024-11-20 14:51:59.426238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.696 qpair failed and we were unable to recover it. 00:32:47.696 [2024-11-20 14:51:59.426359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.426390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.426565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.426596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.426722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.426753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.427027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.427060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.427186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.427217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.427388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.427419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.427594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.427624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.427811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.427841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.428045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.428080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.428294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.428326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.428594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.428625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.428748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.428779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.428964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.428997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.429113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.429144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.429258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.429290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.429472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.429503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.429620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.429653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.429765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.429795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.430053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.430086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.430217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.430248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.430374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.430405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.430648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.430680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.430903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.430967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.431085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.431117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.431295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.431327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.431503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.431535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.431645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.431676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.431781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.431812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.431931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.431972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.432155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.432187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.432356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.432387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.432570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.432602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.432786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.432818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.432982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.433015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.433146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.433178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.433386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.433417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.433695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.433726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.433918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.433956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.434228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.434262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.434370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.434401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 00:32:47.697 [2024-11-20 14:51:59.434519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.434551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.697 [2024-11-20 14:51:59.434811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.697 [2024-11-20 14:51:59.434843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.697 qpair failed and we were unable to recover it. 
00:32:47.699 [2024-11-20 14:51:59.447200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.699 [2024-11-20 14:51:59.447271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.699 qpair failed and we were unable to recover it. 
00:32:47.700 [2024-11-20 14:51:59.457734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.457765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.457878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.457921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.458055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.458086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.458264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.458295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.458494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.458526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 
00:32:47.700 [2024-11-20 14:51:59.458716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.458747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.458862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.458894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.459081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.459113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.459283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.459315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.459559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.459590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 
00:32:47.700 [2024-11-20 14:51:59.459789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.459821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.459933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.700 [2024-11-20 14:51:59.459975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.700 qpair failed and we were unable to recover it. 00:32:47.700 [2024-11-20 14:51:59.460073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.460103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.460241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.460273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.460482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.460514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.460726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.460757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.460879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.460910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.461023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.461055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.461168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.461199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.461328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.461359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.461479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.461510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.461695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.461727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.461914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.461945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.462148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.462180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.462288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.462318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.462439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.462470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.462588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.462619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.462817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.462848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.463028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.463062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.463277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.463308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.463426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.463457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.463569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.463600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.463812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.463843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.463944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.463985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.464100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.464131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.464322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.464353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.464551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.464582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.464710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.464742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.465004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.465037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.465210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.465241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.465418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.465449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.465620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.465657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.465828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.465859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.466053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.466087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.466283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.466314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.466449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.466480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.466668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.466700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.466812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.466842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.466971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.467004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.467120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.467151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.467271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.467302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.467484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.467515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.467629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.467660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.467829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.467860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.467983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.468015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 
00:32:47.701 [2024-11-20 14:51:59.468139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.468171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.701 qpair failed and we were unable to recover it. 00:32:47.701 [2024-11-20 14:51:59.468303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.701 [2024-11-20 14:51:59.468334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.468448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.468479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.468655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.468686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.468864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.468896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 
00:32:47.702 [2024-11-20 14:51:59.469037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.469069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.469247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.469278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.469402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.469434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.469568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.469600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.469709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.469741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 
00:32:47.702 [2024-11-20 14:51:59.469931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.469970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.470258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.470289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.470409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.470440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.470608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.470676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.470831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.470866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 
00:32:47.702 [2024-11-20 14:51:59.471081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.471115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.471309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.471340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.471445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.471476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.471657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.471688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 00:32:47.702 [2024-11-20 14:51:59.471925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.702 [2024-11-20 14:51:59.471967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.702 qpair failed and we were unable to recover it. 
00:32:47.702 [2024-11-20 14:51:59.472112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.702 [2024-11-20 14:51:59.472144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.702 qpair failed and we were unable to recover it.
[... the same three-message failure repeats continuously from 14:51:59.472337 through 14:51:59.499355, first for tqpair=0x59eba0 and then for tqpair=0x7fac30000b90 and tqpair=0x7fac24000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:32:47.705 [2024-11-20 14:51:59.499594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.705 [2024-11-20 14:51:59.499626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.705 qpair failed and we were unable to recover it. 00:32:47.705 [2024-11-20 14:51:59.499886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.499918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.500144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.500212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.500423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.500458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.500764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.500796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 
00:32:47.706 [2024-11-20 14:51:59.501034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.501067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.501263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.501293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.501420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.501450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.501657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.501689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.501896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.501927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 
00:32:47.706 [2024-11-20 14:51:59.502180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.502215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.502412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.502443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.502736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.706 [2024-11-20 14:51:59.502768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.706 qpair failed and we were unable to recover it. 00:32:47.706 [2024-11-20 14:51:59.503073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.503112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.503233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.503269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 
00:32:47.707 [2024-11-20 14:51:59.503522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.503554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.503820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.503852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.504033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.504066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.504271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.504303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.504524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.504556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 
00:32:47.707 [2024-11-20 14:51:59.504803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.504834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.505087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.505119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.505254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.505287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.505466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.505497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 00:32:47.707 [2024-11-20 14:51:59.505744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.707 [2024-11-20 14:51:59.505775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.707 qpair failed and we were unable to recover it. 
00:32:47.707 [2024-11-20 14:51:59.505991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.506024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.506153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.506191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.506429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.506461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.506675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.506707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.506898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.506930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 
00:32:47.708 [2024-11-20 14:51:59.507135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.507167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.507312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.507344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.507476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.507507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.708 [2024-11-20 14:51:59.507827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.708 [2024-11-20 14:51:59.507859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.708 qpair failed and we were unable to recover it. 00:32:47.709 [2024-11-20 14:51:59.508004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.508038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 
00:32:47.709 [2024-11-20 14:51:59.508221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.508253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 00:32:47.709 [2024-11-20 14:51:59.508430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.508462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 00:32:47.709 [2024-11-20 14:51:59.508684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.508715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 00:32:47.709 [2024-11-20 14:51:59.508916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.508957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 00:32:47.709 [2024-11-20 14:51:59.509150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.509182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 
00:32:47.709 [2024-11-20 14:51:59.509322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.709 [2024-11-20 14:51:59.509354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.709 qpair failed and we were unable to recover it. 00:32:47.709 [2024-11-20 14:51:59.509619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.710 [2024-11-20 14:51:59.509650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.710 qpair failed and we were unable to recover it. 00:32:47.710 [2024-11-20 14:51:59.509911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.710 [2024-11-20 14:51:59.509944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.710 qpair failed and we were unable to recover it. 00:32:47.710 [2024-11-20 14:51:59.510215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.710 [2024-11-20 14:51:59.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.710 qpair failed and we were unable to recover it. 00:32:47.710 [2024-11-20 14:51:59.510383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.711 [2024-11-20 14:51:59.510414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.711 qpair failed and we were unable to recover it. 
00:32:47.711 [2024-11-20 14:51:59.510667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.711 [2024-11-20 14:51:59.510700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.711 qpair failed and we were unable to recover it. 00:32:47.711 [2024-11-20 14:51:59.510896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.711 [2024-11-20 14:51:59.510928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.711 qpair failed and we were unable to recover it. 00:32:47.711 [2024-11-20 14:51:59.511127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.711 [2024-11-20 14:51:59.511159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.711 qpair failed and we were unable to recover it. 00:32:47.711 [2024-11-20 14:51:59.511396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.511428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.511735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.511767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 
00:32:47.712 [2024-11-20 14:51:59.511979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.512012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.512147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.512178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.512390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.512422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.512657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.512693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.512908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.512939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 
00:32:47.712 [2024-11-20 14:51:59.513141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.513174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.513353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.513384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.513519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.712 [2024-11-20 14:51:59.513551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.712 qpair failed and we were unable to recover it. 00:32:47.712 [2024-11-20 14:51:59.513812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.513844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 00:32:47.713 [2024-11-20 14:51:59.514109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.514141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 
00:32:47.713 [2024-11-20 14:51:59.514352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.514383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 00:32:47.713 [2024-11-20 14:51:59.514576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.514608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 00:32:47.713 [2024-11-20 14:51:59.514869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.514900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 00:32:47.713 [2024-11-20 14:51:59.515110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.515143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 00:32:47.713 [2024-11-20 14:51:59.515281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.713 [2024-11-20 14:51:59.515313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.713 qpair failed and we were unable to recover it. 
00:32:47.713 [2024-11-20 14:51:59.515536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.515568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.515740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.515778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.515913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.515944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.516253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.516286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.516535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.516567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 
00:32:47.714 [2024-11-20 14:51:59.516716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.516748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.516936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.516976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.517186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.517217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.517362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.714 [2024-11-20 14:51:59.517394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.714 qpair failed and we were unable to recover it. 00:32:47.714 [2024-11-20 14:51:59.517578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.715 [2024-11-20 14:51:59.517609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.715 qpair failed and we were unable to recover it. 
00:32:47.715 [2024-11-20 14:51:59.517817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.715 [2024-11-20 14:51:59.517848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.715 qpair failed and we were unable to recover it. 00:32:47.715 [2024-11-20 14:51:59.518115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.715 [2024-11-20 14:51:59.518149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.715 qpair failed and we were unable to recover it. 00:32:47.715 [2024-11-20 14:51:59.518352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.715 [2024-11-20 14:51:59.518384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.715 qpair failed and we were unable to recover it. 00:32:47.715 [2024-11-20 14:51:59.518588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.715 [2024-11-20 14:51:59.518619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.715 qpair failed and we were unable to recover it. 00:32:47.716 [2024-11-20 14:51:59.518819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.716 [2024-11-20 14:51:59.518850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:47.716 qpair failed and we were unable to recover it. 
00:32:47.723 [2024-11-20 14:51:59.531231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.723 [2024-11-20 14:51:59.531262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.723 qpair failed and we were unable to recover it.
00:32:47.723 [2024-11-20 14:51:59.531390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.723 [2024-11-20 14:51:59.531422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.723 qpair failed and we were unable to recover it.
00:32:47.723 [2024-11-20 14:51:59.531677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.723 [2024-11-20 14:51:59.531709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420
00:32:47.723 qpair failed and we were unable to recover it.
00:32:47.723 [2024-11-20 14:51:59.531998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.723 [2024-11-20 14:51:59.532068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.723 qpair failed and we were unable to recover it.
00:32:47.723 [2024-11-20 14:51:59.532343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:47.723 [2024-11-20 14:51:59.532378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:47.723 qpair failed and we were unable to recover it.
00:32:47.729 [2024-11-20 14:51:59.546862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.546895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.547200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.547237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.547367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.547400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.547722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.547754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.547944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.547986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 
00:32:47.729 [2024-11-20 14:51:59.548229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.548264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.548536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.548568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.548767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.548800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.549080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.549114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.729 [2024-11-20 14:51:59.549394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.549430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 
00:32:47.729 [2024-11-20 14:51:59.549617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.729 [2024-11-20 14:51:59.549649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.729 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.549974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.550008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.550205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.550237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.550440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.550476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.550675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.550706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 
00:32:47.730 [2024-11-20 14:51:59.550977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.551010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.551199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.551230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.551490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.551533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.551723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.730 [2024-11-20 14:51:59.551754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.730 qpair failed and we were unable to recover it. 00:32:47.730 [2024-11-20 14:51:59.552013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.552046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 
00:32:47.731 [2024-11-20 14:51:59.552288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.552319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 00:32:47.731 [2024-11-20 14:51:59.552586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.552629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 00:32:47.731 [2024-11-20 14:51:59.552833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.552865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 00:32:47.731 [2024-11-20 14:51:59.553142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.553175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 00:32:47.731 [2024-11-20 14:51:59.553312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.553343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 
00:32:47.731 [2024-11-20 14:51:59.553537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.731 [2024-11-20 14:51:59.553569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.731 qpair failed and we were unable to recover it. 00:32:47.731 [2024-11-20 14:51:59.553758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.553799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 00:32:47.732 [2024-11-20 14:51:59.553996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.554029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 00:32:47.732 [2024-11-20 14:51:59.554202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.554233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 00:32:47.732 [2024-11-20 14:51:59.554472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.554503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 
00:32:47.732 [2024-11-20 14:51:59.554764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.554796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 00:32:47.732 [2024-11-20 14:51:59.555004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.555040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 00:32:47.732 [2024-11-20 14:51:59.555176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.732 [2024-11-20 14:51:59.555207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.732 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.555418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.555449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.555664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.555695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 
00:32:47.733 [2024-11-20 14:51:59.555822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.555852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.556113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.556146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.556281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.556312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.556494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.556532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.556743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.556775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 
00:32:47.733 [2024-11-20 14:51:59.556923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.556974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.557134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.557166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.557351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.557381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.733 [2024-11-20 14:51:59.557637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.733 [2024-11-20 14:51:59.557669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.733 qpair failed and we were unable to recover it. 00:32:47.734 [2024-11-20 14:51:59.557850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.557884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 
00:32:47.734 [2024-11-20 14:51:59.558064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.558108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 00:32:47.734 [2024-11-20 14:51:59.558354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.558386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 00:32:47.734 [2024-11-20 14:51:59.558563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.558595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 00:32:47.734 [2024-11-20 14:51:59.558859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.558890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 00:32:47.734 [2024-11-20 14:51:59.559046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.559078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 
00:32:47.734 [2024-11-20 14:51:59.559213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.559255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 00:32:47.734 [2024-11-20 14:51:59.559498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.734 [2024-11-20 14:51:59.559530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.734 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.559731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.559762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.560063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.560097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.560236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.560268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 
00:32:47.735 [2024-11-20 14:51:59.560481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.560516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.560712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.560744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.561037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.561071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.561271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.561303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 00:32:47.735 [2024-11-20 14:51:59.561427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.735 [2024-11-20 14:51:59.561466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.735 qpair failed and we were unable to recover it. 
00:32:47.735 [2024-11-20 14:51:59.561736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.561768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 00:32:47.736 [2024-11-20 14:51:59.561969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.562004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 00:32:47.736 [2024-11-20 14:51:59.562245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.562276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 00:32:47.736 [2024-11-20 14:51:59.562459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.562490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 00:32:47.736 [2024-11-20 14:51:59.562799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.562836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 
00:32:47.736 [2024-11-20 14:51:59.563073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.563106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 00:32:47.736 [2024-11-20 14:51:59.563311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.563343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.736 qpair failed and we were unable to recover it. 00:32:47.736 [2024-11-20 14:51:59.563541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.736 [2024-11-20 14:51:59.563573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.563849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.563885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.564033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.564066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 
00:32:47.737 [2024-11-20 14:51:59.564281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.564312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.564459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.564490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.564612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.564643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.564770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.564812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.565065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.565099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 
00:32:47.737 [2024-11-20 14:51:59.565290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.565321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.565520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.737 [2024-11-20 14:51:59.565550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.737 qpair failed and we were unable to recover it. 00:32:47.737 [2024-11-20 14:51:59.565729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.565760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.565969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.566007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.566160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.566199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 
00:32:47.738 [2024-11-20 14:51:59.566345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.566376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.566592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.566624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.566813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.566844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.567047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.567083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.567218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.567249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 
00:32:47.738 [2024-11-20 14:51:59.567525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.567557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.738 [2024-11-20 14:51:59.567727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.738 [2024-11-20 14:51:59.567759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.738 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.567889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.567921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.568131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.568170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.568372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.568404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 
00:32:47.739 [2024-11-20 14:51:59.568618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.568648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.568910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.568941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.569213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.569257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.569537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.569569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.569809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.569841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 
00:32:47.739 [2024-11-20 14:51:59.570025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.570059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.570305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.570349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.570656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.570688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.570877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.570909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.739 [2024-11-20 14:51:59.571212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.571245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 
00:32:47.739 [2024-11-20 14:51:59.571377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.739 [2024-11-20 14:51:59.571407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.739 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.571624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.571658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.571959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.571992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.572127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.572159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.572305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.572335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 
00:32:47.740 [2024-11-20 14:51:59.572507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.572547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.572832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.572872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.573100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.573133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.573279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.573310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 00:32:47.740 [2024-11-20 14:51:59.573504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.740 [2024-11-20 14:51:59.573535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.740 qpair failed and we were unable to recover it. 
00:32:47.740 [2024-11-20 14:51:59.573773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.573809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.574017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.574053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.574246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.574278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.574464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.574496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.574769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.574811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 
00:32:47.741 [2024-11-20 14:51:59.575008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.575042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.575172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.575204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.575414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.575445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.575688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.575720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.741 [2024-11-20 14:51:59.575854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.575887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 
00:32:47.741 [2024-11-20 14:51:59.576178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.741 [2024-11-20 14:51:59.576214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.741 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.576397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.576428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.576668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.576700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.576885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.576917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.577088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.577125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 
00:32:47.742 [2024-11-20 14:51:59.577337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.577369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.577585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.577615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.577851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.577882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.578084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.578127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 00:32:47.742 [2024-11-20 14:51:59.578288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.742 [2024-11-20 14:51:59.578320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.742 qpair failed and we were unable to recover it. 
00:32:47.742 [2024-11-20 14:51:59.578518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.578551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.578826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.578857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.579044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.579078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.579275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.579309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.579526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.579558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 
00:32:47.743 [2024-11-20 14:51:59.579818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.579851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.580043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.580075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.580326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.580370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.580683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.580715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.580966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.580999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 
00:32:47.743 [2024-11-20 14:51:59.581195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.581226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.581501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.581538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.581847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.581879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.582090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.582124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.582271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.582302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 
00:32:47.743 [2024-11-20 14:51:59.582433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.582464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.582729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.582765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.582964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.583003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.583266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.583299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.583599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.583630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 
00:32:47.743 [2024-11-20 14:51:59.583855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.583891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.584176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.584210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.584411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.584443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.584686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.584717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.584901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.584935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 
00:32:47.743 [2024-11-20 14:51:59.585227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.585260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.585478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.585509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.585653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.585684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.585961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.585999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.586228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.586260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 
00:32:47.743 [2024-11-20 14:51:59.586539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.586570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.586765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.743 [2024-11-20 14:51:59.586797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.743 qpair failed and we were unable to recover it. 00:32:47.743 [2024-11-20 14:51:59.587020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.587066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 00:32:47.744 [2024-11-20 14:51:59.587261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.587291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 00:32:47.744 [2024-11-20 14:51:59.587548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.587581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 
00:32:47.744 [2024-11-20 14:51:59.587777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.587809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 00:32:47.744 [2024-11-20 14:51:59.588071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.588107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 00:32:47.744 [2024-11-20 14:51:59.588405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.588440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 00:32:47.744 [2024-11-20 14:51:59.588673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.588704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 00:32:47.744 [2024-11-20 14:51:59.588959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.588993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 
00:32:47.744 [2024-11-20 14:51:59.589190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.744 [2024-11-20 14:51:59.589223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:47.744 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x59eba0 with addr=10.0.0.2, port=4420, from 14:51:59.589514 through 14:51:59.611170 (log timestamps 00:32:47.744-00:32:48.027) ...]
00:32:48.027 [2024-11-20 14:51:59.611502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.027 [2024-11-20 14:51:59.611578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.027 qpair failed and we were unable to recover it. 
00:32:48.027 [2024-11-20 14:51:59.611958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.027 [2024-11-20 14:51:59.612033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.027 qpair failed and we were unable to recover it. 
[... the same sequence repeats for tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420, from 14:51:59.612200 through 14:51:59.618981 (log timestamps 00:32:48.027-00:32:48.028) ...]
00:32:48.028 [2024-11-20 14:51:59.619172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.619204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.619453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.619483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.619787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.619817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.619969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.620001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.620271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.620304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.620614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.620645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.620821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.620851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.621055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.621088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.621217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.621249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.621505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.621536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.621817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.621849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.622135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.622169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.622368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.622400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.622614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.622645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.622854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.622884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.623103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.623135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.623341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.623373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.623697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.623729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.623965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.623998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.624219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.624249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.624524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.624555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.624836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.624868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.625084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.625117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.625373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.625404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.625707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.625745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.625957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.625990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.626268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.626299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.626434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.626465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.626667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.626699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.627005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.627036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.627302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.627334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.627554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.627585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.627787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.627818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.628095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.628128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 00:32:48.028 [2024-11-20 14:51:59.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.028 [2024-11-20 14:51:59.628414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.028 qpair failed and we were unable to recover it. 
00:32:48.028 [2024-11-20 14:51:59.628618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.628648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.628897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.628929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.629157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.629189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.629439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.629472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.629738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.629769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.629967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.630000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.630192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.630223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.630473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.630504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.630724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.630755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.630911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.630943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.631141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.631172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.631446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.631478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.631748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.631780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.631990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.632023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.632267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.632299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.632578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.632609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.632822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.632855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.633085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.633117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.633312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.633344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.633650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.633681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.633808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.633840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.634095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.634128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.634408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.634440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.634724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.634755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.634963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.634995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.635256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.635288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.635480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.635512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.635709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.635742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.635994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.636027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.636230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.636262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.636537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.636568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.636857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.636888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.637173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.637207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.637489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.637520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.637820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.637851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 
00:32:48.029 [2024-11-20 14:51:59.638071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.638104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.029 [2024-11-20 14:51:59.638396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.029 [2024-11-20 14:51:59.638428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.029 qpair failed and we were unable to recover it. 00:32:48.030 [2024-11-20 14:51:59.638647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.030 [2024-11-20 14:51:59.638679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.030 qpair failed and we were unable to recover it. 00:32:48.030 [2024-11-20 14:51:59.638963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.030 [2024-11-20 14:51:59.638995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.030 qpair failed and we were unable to recover it. 00:32:48.030 [2024-11-20 14:51:59.639231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.030 [2024-11-20 14:51:59.639263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.030 qpair failed and we were unable to recover it. 
00:32:48.030 [2024-11-20 14:51:59.639516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.030 [2024-11-20 14:51:59.639548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.030 qpair failed and we were unable to recover it.
00:32:48.030 (previous three-line error pattern repeated for each subsequent connection attempt, 14:51:59.639804 through 14:51:59.670021, all with errno = 111, tqpair=0x7fac28000b90, addr=10.0.0.2, port=4420)
00:32:48.033 [2024-11-20 14:51:59.670223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.670254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.670530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.670562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.670762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.670793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.671050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.671083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.671356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.671386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 
00:32:48.033 [2024-11-20 14:51:59.671670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.671701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.671991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.672023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.672221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.672253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.672441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.672473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.672663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.672695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 
00:32:48.033 [2024-11-20 14:51:59.672969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.673003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.673278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.673310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.673505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.673536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.673804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.673836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.674088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.674121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 
00:32:48.033 [2024-11-20 14:51:59.674331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.674363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.674633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.674664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.674968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.675001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.675265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.675297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.675440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.675472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 
00:32:48.033 [2024-11-20 14:51:59.675688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.675719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.675988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.676026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 [2024-11-20 14:51:59.676323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.676356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1755768 Killed "${NVMF_APP[@]}" "$@" 00:32:48.033 [2024-11-20 14:51:59.676632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.033 [2024-11-20 14:51:59.676665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.033 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.676926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.676968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 [2024-11-20 14:51:59.677265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.677298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:32:48.034 [2024-11-20 14:51:59.677438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.677471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.677665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.677699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:48.034 [2024-11-20 14:51:59.677921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.677963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:48.034 [2024-11-20 14:51:59.678235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.678269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:48.034 [2024-11-20 14:51:59.678546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.678579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.034 [2024-11-20 14:51:59.678779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.678813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.679090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.679123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 [2024-11-20 14:51:59.679309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.679342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.679523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.679555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.679763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.679793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.680075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.680108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.680408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.680439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 [2024-11-20 14:51:59.680705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.680737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.680999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.681032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.681331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.681363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.681507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.681538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.681813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.681846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 [2024-11-20 14:51:59.682070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.682102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.682317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.682349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.682608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.682647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.682835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.682866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.683060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.683092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 [2024-11-20 14:51:59.683344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.683376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.683502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.683533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.683829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.683861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.684040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.684072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.684377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.684409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 [2024-11-20 14:51:59.684763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.684795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.685142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.685174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.685304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.685335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 [2024-11-20 14:51:59.685576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.685609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1756486 00:32:48.034 [2024-11-20 14:51:59.685826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.685859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 
00:32:48.034 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1756486 00:32:48.034 [2024-11-20 14:51:59.686074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.034 [2024-11-20 14:51:59.686109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.034 qpair failed and we were unable to recover it. 00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:48.035 [2024-11-20 14:51:59.686365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.686398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1756486 ']' 00:32:48.035 [2024-11-20 14:51:59.686652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.686685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 
00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.035 [2024-11-20 14:51:59.686831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.686864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 [2024-11-20 14:51:59.687050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.687084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.035 [2024-11-20 14:51:59.687329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.687363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:48.035 [2024-11-20 14:51:59.687561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.687593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.035 [2024-11-20 14:51:59.687777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.687813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.035 [2024-11-20 14:51:59.688028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.688064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 00:32:48.035 [2024-11-20 14:51:59.688272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.688315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 
00:32:48.035 [2024-11-20 14:51:59.688526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.035 [2024-11-20 14:51:59.688557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.035 qpair failed and we were unable to recover it. 
00:32:48.035-00:32:48.038 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (errno = 111, tqpair=0x7fac28000b90, addr=10.0.0.2, port=4420) repeats, with only the microsecond timestamp changing, roughly 110 further times between 14:51:59.688 and 14:51:59.716; every reconnect attempt ends with "qpair failed and we were unable to recover it." ...] 
00:32:48.038 [2024-11-20 14:51:59.716996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.717032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.717219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.717252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.717456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.717492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.717611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.717643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.717846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.717883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 
00:32:48.038 [2024-11-20 14:51:59.718091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.718125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.718318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.718350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.718611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.718644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.718909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.718941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.719155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.719187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 
00:32:48.038 [2024-11-20 14:51:59.719318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.719351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.719536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.719567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.719778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.719811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.719999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.720031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.720284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.720316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 
00:32:48.038 [2024-11-20 14:51:59.720453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.720486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.720695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.720726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.038 [2024-11-20 14:51:59.720981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.038 [2024-11-20 14:51:59.721014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.038 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.721226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.721259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.721375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.721407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.721607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.721640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.721847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.721878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.722102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.722136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.722344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.722376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.722510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.722542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.722667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.722698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.722826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.722857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.723061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.723094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.723217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.723248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.723437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.723468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.723653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.723687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.723972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.724004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.724196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.724228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.724508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.724540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.724734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.724765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.724888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.724919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.725114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.725192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.725396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.725432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.725720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.725752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.726219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.726263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.726468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.726504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.726656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.726688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.726818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.726850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.727041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.727075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.727341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.727384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.727525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.727575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.727730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.727763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.728040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.728075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.728273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.728303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.728497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.728529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.728778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.728809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.729091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.729124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.729260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.729292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.729540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.729572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.729705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.729739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 
00:32:48.039 [2024-11-20 14:51:59.729865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.039 [2024-11-20 14:51:59.729896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.039 qpair failed and we were unable to recover it. 00:32:48.039 [2024-11-20 14:51:59.730119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.730153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.730305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.730338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.730504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.730582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.730885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.731000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 
00:32:48.040 [2024-11-20 14:51:59.731142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.731180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.731369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.731400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.731537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.731568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.731774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.731807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.732005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.732038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 
00:32:48.040 [2024-11-20 14:51:59.732290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.732322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.732470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.732501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.732639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.732671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.732893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.732923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.733125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.733158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 
00:32:48.040 [2024-11-20 14:51:59.733290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.733321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.733502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.733540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.733733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.733766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.734020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.734053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.734251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.734283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 
00:32:48.040 [2024-11-20 14:51:59.734409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.734441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.734629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.734660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.734855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.734887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.735080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.735113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.735244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.735275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 
00:32:48.040 [2024-11-20 14:51:59.735424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.735454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.735646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.735677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.735872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.735903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.736031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.736063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.736262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.736293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 
00:32:48.040 [2024-11-20 14:51:59.736516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.736549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.736801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.736832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.736973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.737005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.737200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.737233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.040 qpair failed and we were unable to recover it. 00:32:48.040 [2024-11-20 14:51:59.737415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.040 [2024-11-20 14:51:59.737446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.737590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.737622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.737815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.737847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.737973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.738006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.738138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.738170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.738358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.738391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.738530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.738561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.738812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.738845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.739039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.739071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.739256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.739334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.739529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.739607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.739637] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:32:48.041 [2024-11-20 14:51:59.739697] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.041 [2024-11-20 14:51:59.739916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.739987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.740125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.740156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.740458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.740492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.740772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.740808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.740966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.741000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.741129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.741162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.741368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.741402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.741598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.741631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.741837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.741873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.742029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.742065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.742319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.742360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.742653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.742687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.742883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.742920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.743061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.743096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.743298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.743330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.743452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.743485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.743693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.743727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.743913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.743960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.744094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.744128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.744449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.744482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.744606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.744636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.744887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.744918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.745209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.745245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.745373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.745404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.745620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.745662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 
00:32:48.041 [2024-11-20 14:51:59.745865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.745899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.041 [2024-11-20 14:51:59.746043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.041 [2024-11-20 14:51:59.746077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.041 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.746259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.746292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.746406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.746438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.746571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.746603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.746796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.746829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.746965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.746997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.747248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.747281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.747487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.747519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.747635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.747667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.747863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.747896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.748104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.748138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.748338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.748387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.748571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.748604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.748750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.748783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.748990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.749025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.749305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.749337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.749585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.749617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.749814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.749848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.750046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.750079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.750258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.750291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.750483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.750515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.750718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.750749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.750944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.750986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.751111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.751144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.751343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.751375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.751585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.751619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.751757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.751790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.751990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.752025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.752244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.752277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.752462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.752495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.752641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.752672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.752881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.752915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac24000b90 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.753068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.753109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.753239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.753272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 
00:32:48.042 [2024-11-20 14:51:59.753398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.753430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.753700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.753732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.753919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.753963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.754140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.754176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.042 qpair failed and we were unable to recover it. 00:32:48.042 [2024-11-20 14:51:59.754358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.042 [2024-11-20 14:51:59.754397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.754541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.754574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.754703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.754736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.754943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.754992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.755273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.755307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.755497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.755529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.755745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.755778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.756077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.756113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.756298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.756338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.756518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.756550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.756753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.756785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.756980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.757014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.757231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.757263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.757402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.757437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.757635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.757668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.757791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.757823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.758095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.758130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.758255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.758288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.758514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.758552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.758686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.758718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.758930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.758975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.759253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.759285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.759492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.759523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.759660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.759697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.759888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.759921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.760152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.760187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.760444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.760478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.760677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.760730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.760847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.760879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.761139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.761174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.761467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.761500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.761618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.761651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.761883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.761931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.762269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.762302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.762500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.762533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.762664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.762697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 
00:32:48.043 [2024-11-20 14:51:59.762899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.762939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.763099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.763132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.043 [2024-11-20 14:51:59.763345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.043 [2024-11-20 14:51:59.763378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.043 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.763521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.763554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.763684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.763715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.763911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.763944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.764111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.764146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.764285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.764318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.764513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.764544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.764683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.764714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.764822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.764853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.765049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.765086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.765349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.765385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.765561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.765595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.765723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.765754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.766009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.766043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.766239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.766277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.766401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.766433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.766625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.766657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.766842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.766875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.767062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.767095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.767269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.767311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.767518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.767553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.767726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.767758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.767935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.767991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.768102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.768134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.768400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.768431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.768619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.768654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.768831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.768862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.769053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.769087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.769205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.769238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.769427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.769460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.769586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.769634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.769826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.769858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.770070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.770103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 
00:32:48.044 [2024-11-20 14:51:59.770243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.770275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.044 qpair failed and we were unable to recover it. 00:32:48.044 [2024-11-20 14:51:59.770394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.044 [2024-11-20 14:51:59.770426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.770552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.770584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.770766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.770800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.771072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.771106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.771323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.771355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.771547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.771579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.771849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.771886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.772087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.772122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.772301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.772332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.772538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.772571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.772742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.772775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.772887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.772918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.773112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.773145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.773292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.773324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.773588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.773620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.773739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.773771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.773899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.773931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.774071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.774112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.774366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.774401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.774589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.774622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.774760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.774792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.774934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.774977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.775174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.775208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.775383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.775422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.775614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.775647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.775764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.775796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.775995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.776030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.776156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.776195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.776316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.776347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.776548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.776579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.776703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.776736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.776915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.776959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.777139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.777169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.777368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.777403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.045 [2024-11-20 14:51:59.777599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.777631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.777899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.777931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.778142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.778175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.778426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.778501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 00:32:48.045 [2024-11-20 14:51:59.778729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.045 [2024-11-20 14:51:59.778766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.045 qpair failed and we were unable to recover it. 
00:32:48.046 [2024-11-20 14:51:59.778978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.779015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.779215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.779248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.779422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.779457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.779652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.779685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.779872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.780100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.780134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.780351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.780383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.780502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.780535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.780724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.780756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.780959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.780992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.781100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.781131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.781305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.781347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.781545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.781579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.781875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.781909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.782185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.782219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.782396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.782427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.782559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.782591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.782769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.782801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.783000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.783034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.783166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.783198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.783391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.783442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.783651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.783682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.783871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.783903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.784103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.784136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.784307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.784340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.784627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.784658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.784795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.784827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.785086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.785120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.785365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.785397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.785528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.785559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.785741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.785774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.785969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.786002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.786177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.786208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.786479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.786512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.786773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.786805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.787050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.787083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.787253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.787286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.046 [2024-11-20 14:51:59.787412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.046 [2024-11-20 14:51:59.787443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.046 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.787580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.787613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.787910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.787943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.788147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.788179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.788356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.788387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.788652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.788687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.788934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.788976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.789102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.789135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.789328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.789361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.789544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.789577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.789811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.789843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.790016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.790050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.790316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.790349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.790543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.790574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.790713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.790752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.790935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.790977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.791100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.791134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.791379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.791412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.791530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.791564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.791752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.791785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.791990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.792023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.792205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.792238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.792356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.792388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.792512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.792545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.792733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.792765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.793025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.793058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.793309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.793341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.793526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.793558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.793749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.793781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.793913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.793945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.794167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.794199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.794375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.794407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.794650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.794683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.794861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.794893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.795156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.795189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.795303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.795334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.795470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.795503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.795629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.795661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.047 [2024-11-20 14:51:59.795788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.047 [2024-11-20 14:51:59.795820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.047 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.796064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.796098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.796293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.796326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.796519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.796553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.796748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.796780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.796907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.796939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.797246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.797279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.797388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.797419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.797550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.797583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.797693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.797726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.797847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.797879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.798056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.798089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.798285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.798318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.798437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.798469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.798588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.798620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.798793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.798826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.799001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.799039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.799227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.799258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.799462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.799494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.799764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.048 [2024-11-20 14:51:59.799796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.048 qpair failed and we were unable to recover it.
00:32:48.048 [2024-11-20 14:51:59.799990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.800023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.800275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.800307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.800568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.800601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.800839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.800872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.801137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.801171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 
00:32:48.048 [2024-11-20 14:51:59.801392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.801423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.801666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.801699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.801877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.801909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.802091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.802125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.802302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.802335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 
00:32:48.048 [2024-11-20 14:51:59.802588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.802620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.802840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.802872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.803085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.803118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.803368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.803400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.803536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.803567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 
00:32:48.048 [2024-11-20 14:51:59.803771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.803803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.803999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.804035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.804293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.804325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.048 [2024-11-20 14:51:59.804515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.048 [2024-11-20 14:51:59.804547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.048 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.804782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.804814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.805033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.805066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.805190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.805221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.805418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.805451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.805653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.805701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.805891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.805925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.806132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.806166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.806358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.806390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.806601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.806632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.806742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.806776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.807027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.807061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.807252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.807284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.807397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.807430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.807717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.807750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.807920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.807962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.808082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.808114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.808287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.808317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.808558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.808590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.808729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.808761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.808892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.808925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.809042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.809076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.809261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.809292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.809483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.809514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.809699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.809732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.809978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.810013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.810257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.810289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.810477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.810510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.810631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.810664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.810980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.811014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.811218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.811251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.811516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.811549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 
00:32:48.049 [2024-11-20 14:51:59.811689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.811728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.811920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.811967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.049 [2024-11-20 14:51:59.812084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.049 [2024-11-20 14:51:59.812119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.049 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.812248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.812279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.812535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.812568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.812756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.812788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.812930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.812971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.813150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.813183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.813398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.813431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.813690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.813723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.813913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.813945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.814151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.814183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.814358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.814390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.814650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.814683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.814875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.814908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.815048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.815081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.815286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.815318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.815602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.815635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.815909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.815940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.816159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.816192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.816389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.816421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.816656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.816688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.816967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.817000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.817190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.817223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.817483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.817515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.817627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.817660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.817919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.817962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.818227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.818259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.818459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.818491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.818669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.818703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.818878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.818909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.819109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.819142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.819356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.819388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.819600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.819631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.819787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.819818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.820025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.820059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.820235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.820267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.820454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.820486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.820615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.820648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.050 [2024-11-20 14:51:59.820883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.820915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 
00:32:48.050 [2024-11-20 14:51:59.821111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.050 [2024-11-20 14:51:59.821145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.050 qpair failed and we were unable to recover it. 00:32:48.051 [2024-11-20 14:51:59.821327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.051 [2024-11-20 14:51:59.821360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.051 qpair failed and we were unable to recover it. 00:32:48.051 [2024-11-20 14:51:59.821500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.051 [2024-11-20 14:51:59.821531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.051 qpair failed and we were unable to recover it. 00:32:48.051 [2024-11-20 14:51:59.821662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.051 [2024-11-20 14:51:59.821693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.051 qpair failed and we were unable to recover it. 00:32:48.051 [2024-11-20 14:51:59.821889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.051 [2024-11-20 14:51:59.821923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.051 qpair failed and we were unable to recover it. 
00:32:48.051 [2024-11-20 14:51:59.822115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.822148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.822322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.822355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.822492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.822524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.822703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.822735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.822972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.823004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.823138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.823171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.823416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.823449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.823558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.823590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.823895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.823928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.824145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.824161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:48.051 [2024-11-20 14:51:59.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.824371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.824404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.824548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.824579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.824841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.824873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.825132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.825166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.825277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.825309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.825429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.825460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.825669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.825701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.825821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.825854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.825980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.826013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.826226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.826258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.826463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.826496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.826694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.826726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.826986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.827018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.827287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.827320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.827444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.827658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.827690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.827888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.827922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.828119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.828152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.828333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.828365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.828504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.828536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.828713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.828745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.828990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.829023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.829200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.829233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.051 qpair failed and we were unable to recover it.
00:32:48.051 [2024-11-20 14:51:59.829416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.051 [2024-11-20 14:51:59.829448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.829711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.829743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.829852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.829884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.830090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.830124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.830307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.830340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.830525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.830558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.830747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.830778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.831026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.831059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.831327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.831360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.831482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.831514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.831633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.831667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.831913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.831945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.832148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.832181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.832372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.832405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.832516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.832549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.832847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.832878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.833075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.833108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.833356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.833395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.833573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.833605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.833782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.833816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.834017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.834049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.834247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.834280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.834509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.834541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.834722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.834754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.834966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.835000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.835138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.835171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.835418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.835454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.835629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.835662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.835925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.835980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.836178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.836211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.836471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.836504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.836753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.836786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.836968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.837001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.837270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.837301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.837509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.837541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.837814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.837846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.838029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.838062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.838191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.052 [2024-11-20 14:51:59.838223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.052 qpair failed and we were unable to recover it.
00:32:48.052 [2024-11-20 14:51:59.838353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.838385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.838572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.838604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.838839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.838871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.839083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.839116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.839306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.839337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.839549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.839581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.839834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.839866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.839999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.840033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.840166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.840199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.840417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.840449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.840585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.840616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.840750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.840781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.840991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.841024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.841234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.841267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.841438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.841469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.841655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.841688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.841877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.841909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.842091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.842126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.842314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.842346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.842530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.842562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.842821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.842896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.843197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.843236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.843437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.843470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.843655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.843687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.843867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.843899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.844147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.844179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.844383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.844416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.844561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.844593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.844833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.844863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.845133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.845167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.845345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.845377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.845558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.845588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.845830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.845862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.053 [2024-11-20 14:51:59.846106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.053 [2024-11-20 14:51:59.846150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.053 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.846390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.846421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.846538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.846569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.846770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.846804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.847043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.847076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.847246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.847278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.847491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.847523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.847758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.847789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.847978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.848010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.848205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.848237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.848365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.848396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.848660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.848692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.848943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.848987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.849118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.849150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.849370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.849402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.849629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.849662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.849777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.849808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.850072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.850105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.850238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.850269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.850401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.850432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.850669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.850702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.850825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.850857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.851067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.851099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.851346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.851378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.851567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.851598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.851848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.851880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.852118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.852151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.852351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.852383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.852639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.852670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.852852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.852883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.853083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.853117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.853384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.853416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.853603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.853635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.853764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.853795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.854019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.854054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.854238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.854269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.854413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.854444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.854704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.854736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.054 qpair failed and we were unable to recover it.
00:32:48.054 [2024-11-20 14:51:59.854927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.054 [2024-11-20 14:51:59.854966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.855201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.855233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.855369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.855413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.855607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.855638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.855840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.855872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.855994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.856026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.856141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.856172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.856333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.856365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.856553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.856584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.856761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.856792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.856978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.857012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.857217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.857249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.857363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.857396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.857584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.857616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.857821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.857854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.858099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.858132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.858275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.858308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.858442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.858474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.858599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.858631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.858821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.858853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.859050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.859081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.859214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.859247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.859439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.859471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.859739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.859771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.859972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.860005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.860137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.860172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.860356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.860387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.860502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.860534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.860717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.860749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.860959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.861009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.861236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.861271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.861467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.861499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.861681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.861713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.861839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.861872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.862119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.862153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.862328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.862361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.862533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.862565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.862756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.055 [2024-11-20 14:51:59.862788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.055 qpair failed and we were unable to recover it.
00:32:48.055 [2024-11-20 14:51:59.862992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.863025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.863202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.863236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.863362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.863395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.863609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.863642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.863847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.863879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.864142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.864177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.864361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.864394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.864637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.864669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.864929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.864973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.865215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.865248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.865486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.865520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.865699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.865733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.865972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.866007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.866232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:48.056 [2024-11-20 14:51:59.866261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:48.056 [2024-11-20 14:51:59.866256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.866270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:48.056 [2024-11-20 14:51:59.866278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:48.056 [2024-11-20 14:51:59.866284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:48.056 [2024-11-20 14:51:59.866288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.866465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.866498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.866709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.866741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.866983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.867018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.867207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.867239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.867422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.867453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.867711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.867743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.867873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.867906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.867883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:48.056 [2024-11-20 14:51:59.868058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.867990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:48.056 [2024-11-20 14:51:59.868091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 [2024-11-20 14:51:59.868098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.868099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:48.056 [2024-11-20 14:51:59.868309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.868342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.868605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.868638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.868822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.868855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.869127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.869164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.869347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.869380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.869618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.869651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.869876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.869910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.870104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.870138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.870431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.870466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.870660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.870693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.870890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.870922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.871146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.871179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.871456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.056 [2024-11-20 14:51:59.871489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.056 qpair failed and we were unable to recover it.
00:32:48.056 [2024-11-20 14:51:59.871616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.871650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.871892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.871926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.872066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.872101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.872321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.872356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.872551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.872583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.872771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.872805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.872985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.873027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.873284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.873317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.873447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.873480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.873721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.873754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.874011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.874044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.874183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.874216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.874352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.874384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.874569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.874601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.874845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.874877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.874999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.875032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.875229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.875261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.875469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.875502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.875617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.875649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.875850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.875883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.876028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.876063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.876187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.876219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.876344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.876376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.876502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.876534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.876656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.876688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.876816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.876848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.877042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.877077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.877204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.877237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.877476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.877508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.877631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.877664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.877835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.877867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.878052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.878086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.878337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.878371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.878508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.878553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.878761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.878795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.878992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.879025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.879196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.879230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.879388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.057 [2024-11-20 14:51:59.879420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.057 qpair failed and we were unable to recover it.
00:32:48.057 [2024-11-20 14:51:59.879732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.879765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.880084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.880115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.880313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.880345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.880475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.880507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.880778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.880812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.880986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.881019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.881211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.881245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.881363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.881396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.881659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.881699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.881969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.882003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.882191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.882223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.882351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.882385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.882526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.882560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.882730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.882761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.882964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.882997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.883192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.883224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.883517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.883549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.883736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.883767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.884013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.884047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.884180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.884213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.884435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.884466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.884711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.884745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.885004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.885059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.885339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.885378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.885578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.885611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.885872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.885905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.886105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.886140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.886398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.886433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.886717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.886751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.887020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.058 [2024-11-20 14:51:59.887054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420
00:32:48.058 qpair failed and we were unable to recover it.
00:32:48.058 [2024-11-20 14:51:59.887247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.058 [2024-11-20 14:51:59.887281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.058 qpair failed and we were unable to recover it. 00:32:48.058 [2024-11-20 14:51:59.887519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.058 [2024-11-20 14:51:59.887552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.058 qpair failed and we were unable to recover it. 00:32:48.058 [2024-11-20 14:51:59.887787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.058 [2024-11-20 14:51:59.887819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.058 qpair failed and we were unable to recover it. 00:32:48.058 [2024-11-20 14:51:59.887926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.058 [2024-11-20 14:51:59.887966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.888182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.888215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.888533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.888595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.888853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.888887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.889150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.889183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.889448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.889480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.889745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.889779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.890021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.890055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.890278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.890310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.890581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.890614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.890811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.890842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.891071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.891104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.891345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.891378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.891635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.891667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.891918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.891957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.892203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.892235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.892433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.892467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.892743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.892777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.892975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.893010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.893158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.893191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.893389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.893420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.893624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.893656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.893789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.893822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.894087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.894122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.894396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.894429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.894715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.894748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.894968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.895000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.895243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.895275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.895476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.895509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.895682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.895720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.895967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.896001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.896264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.896296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.896507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.896539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.896829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.896862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.897130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.897163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.897435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.897468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.897753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.897786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 
00:32:48.059 [2024-11-20 14:51:59.898044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.898079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.898214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.059 [2024-11-20 14:51:59.898246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.059 qpair failed and we were unable to recover it. 00:32:48.059 [2024-11-20 14:51:59.898423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.898456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.898663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.898696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.898961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.898996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.899186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.899220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.899407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.899440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.899568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.899600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.899787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.899820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.900006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.900039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.900207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.900240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.900420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.900455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.900742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.900775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.900966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.901000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.901196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.901230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.901437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.901471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.901675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.901707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.901903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.901937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.902213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.902248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.902517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.902558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.902807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.902841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.903086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.903120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.903327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.903359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.903623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.903658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.903847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.903880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.904143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.904177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.904465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.904498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.904701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.904733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.904918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.904956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.905260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.905464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.905495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.905756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.905787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.906040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.906073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.906301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.906352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.906545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.906576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.906846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.906878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.907057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.907090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.907329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.907360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.907568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.907600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 00:32:48.060 [2024-11-20 14:51:59.907865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.060 [2024-11-20 14:51:59.907896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.060 qpair failed and we were unable to recover it. 
00:32:48.060 [2024-11-20 14:51:59.908146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.908178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.908432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.908463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.908645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.908678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.908941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.908981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.909174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.909207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.909448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.909481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.909768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.909807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.910070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.910104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.910342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.910375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.910662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.910694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.910968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.911002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.911127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.911158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.911417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.911451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.911640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.911671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.911838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.911870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.912087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.912120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.912383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.912416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.912679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.912711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.912992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.913025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.913235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.913268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.913486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.913519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.913806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.913844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.914108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.914145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.914266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.914298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.914492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.914526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.914750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.914787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.915047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.915081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.915355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.915388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.915668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.915701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.915983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.916018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.916296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.916328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.916588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.916619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.916805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.916837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.917135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.917194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.917477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.917510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 
00:32:48.061 [2024-11-20 14:51:59.917776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.917809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.918098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.918134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.918333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.918366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.061 qpair failed and we were unable to recover it. 00:32:48.061 [2024-11-20 14:51:59.918657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.061 [2024-11-20 14:51:59.918690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.918880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.918911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.919113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.919148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.919409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.919441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.919731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.919763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.920036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.920070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.920311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.920343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.920517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.920549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.920841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.920882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.921135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.921170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.921360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.921392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.921663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.921697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.921870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.921904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.922122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.922156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.922425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.922459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.922601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.922635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.922755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.922787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.922969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.923005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.923291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.923331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.923624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.923660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.923863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.923900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.924179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.924214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.924421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.924454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.924745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.924778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.924982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.925015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.925223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.925255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.925498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.925532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.925776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.925809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.926000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.926035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.926277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.926309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.926550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.926582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.926764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.926797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.062 [2024-11-20 14:51:59.927096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.927130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.927384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.927416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.927675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.927707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.927841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.927874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 00:32:48.062 [2024-11-20 14:51:59.928063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.062 [2024-11-20 14:51:59.928096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.062 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.928279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.928311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.928490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.928522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.928726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.928757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.928960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.928992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.929183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.929216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.929456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.929487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.929718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.929750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.930007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.930041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.930340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.930373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.930634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.930666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.930987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.931021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.931234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.931272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.931518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.931549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.931745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.931778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.931960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.931993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.932255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.932287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.932557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.932589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.932831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.932863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.933116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.933149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.933345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.933377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.933620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.933651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.933820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.933852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.934126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.934159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.934287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.934319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.934448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.934479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.934733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.934766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.934967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.935002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.935126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.935158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.935402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.935435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.935617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.935649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 
00:32:48.063 [2024-11-20 14:51:59.935831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.935863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.936115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.936148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.936336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.936368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.936609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.063 [2024-11-20 14:51:59.936641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.063 qpair failed and we were unable to recover it. 00:32:48.063 [2024-11-20 14:51:59.936817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.936850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.937037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.937069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.937343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.937375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.937659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.937691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac30000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.937936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.938001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.938289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.938328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.938578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.938610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.938771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.938802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.939022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.939055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.939258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.939289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.939473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.939503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.939711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.939743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.939966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.940000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.940239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.940269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.940505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.940536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.940711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.940743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.940915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.940945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.941154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.941184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.941405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.941437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.941624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.941655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.941832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.941863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.942071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.942105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.942365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.942396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.942517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.942548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.942732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.942764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.942871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.942901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.943127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.943160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.943441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.943472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.943751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.943781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.944064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.944096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.944295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.944328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.944581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.944613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.944817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.944848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.944970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.945002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.945135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.945168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.945408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.945438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 
00:32:48.064 [2024-11-20 14:51:59.945640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.945671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.945803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.064 [2024-11-20 14:51:59.945835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.064 qpair failed and we were unable to recover it. 00:32:48.064 [2024-11-20 14:51:59.946034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.946066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.946332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.946364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.946561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.946591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.946850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.946881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.947170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.947202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.947392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.947424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.947663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.947700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.947876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.947907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.948205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.948238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.948421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.948453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.948718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.948748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.948934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.948983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.949192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.949223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.949483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.949516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.949709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.949739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.949916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.949958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.950202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.950234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.950522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.950554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.950740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.950770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.951035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.951067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.951352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.951383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.951680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.951710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.951922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.951960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.952149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.952179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.952441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.952472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.952685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.952716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.952965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.952996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.953199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.953233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.953429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.953462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.953591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.953621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.953874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.953904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.954202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.954235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.954493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.954523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.065 [2024-11-20 14:51:59.954808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.954839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.955114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.955147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.955431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.955462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.955739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.955768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 00:32:48.065 [2024-11-20 14:51:59.956050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.065 [2024-11-20 14:51:59.956082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.065 qpair failed and we were unable to recover it. 
00:32:48.066 [2024-11-20 14:51:59.956266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.956296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.956533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.956564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.956786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.956817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.956966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.957005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.957309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.957339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 
00:32:48.066 [2024-11-20 14:51:59.957496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.957527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.957792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.957823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.958012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.958044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.958171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.958208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 00:32:48.066 [2024-11-20 14:51:59.958472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.066 [2024-11-20 14:51:59.958502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.066 qpair failed and we were unable to recover it. 
00:32:48.066 [2024-11-20 14:51:59.958648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.066 [2024-11-20 14:51:59.958678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.066 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 110 more times, log timestamps 00:32:48.066–00:32:48.340, event timestamps 14:51:59.958918 through 14:51:59.985373 ...]
00:32:48.339 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:48.339 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:32:48.339 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:48.339 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:48.339 14:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.340 [2024-11-20 14:51:59.985572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.985604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.985816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.985847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.985985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.986024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.986148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.986185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.986293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.986324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 
00:32:48.340 [2024-11-20 14:51:59.986564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.986594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.986718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.986748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.986919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.986963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.987149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.987181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 00:32:48.340 [2024-11-20 14:51:59.987299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.340 [2024-11-20 14:51:59.987330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.340 qpair failed and we were unable to recover it. 
00:32:48.340 [2024-11-20 14:51:59.987467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.987497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.987689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.987721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.987895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.987925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.988035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.988066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.988191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.988222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.988341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.988371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.988484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.988514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.988647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.988680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.988811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.988841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.989019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.989052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.989232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.989263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.989460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.989490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.989685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.989716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.989966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.989999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.990206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.990236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.990407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.990439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.990656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.990688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.990892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.990924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.991171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.991203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.991322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.991354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.991495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.991527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.991707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.991738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.992020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.992053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.992231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.992263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.992502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.992533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.992722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.992754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.992994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.993028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.993171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.993204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.993390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.993422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.993719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.993751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.993964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.993996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.994126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.994158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.994425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.994456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.994691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.994729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.994998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.995031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 
00:32:48.341 [2024-11-20 14:51:59.995163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.995194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.995431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.995462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.995665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.341 [2024-11-20 14:51:59.995698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.341 qpair failed and we were unable to recover it. 00:32:48.341 [2024-11-20 14:51:59.995849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.995879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.996137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.996169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 
00:32:48.342 [2024-11-20 14:51:59.996365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.996398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.996658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.996689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.996905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.996935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.997104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.997138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.997327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.997357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 
00:32:48.342 [2024-11-20 14:51:59.997540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.997573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.997835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.997867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.998079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.998112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.998239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.998271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.998512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.998542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 
00:32:48.342 [2024-11-20 14:51:59.998730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.998762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.999053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.999085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.999277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.999310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.999451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.999481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:51:59.999721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.999753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 
00:32:48.342 [2024-11-20 14:51:59.999943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:51:59.999983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.000126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.000158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.000419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.000450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.000753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.000785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.000995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.001027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 
00:32:48.342 [2024-11-20 14:52:00.001281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.001313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.001509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.001542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.001764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.001795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.002038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.002071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 00:32:48.342 [2024-11-20 14:52:00.002225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.342 [2024-11-20 14:52:00.002256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.342 qpair failed and we were unable to recover it. 
00:32:48.342 [2024-11-20 14:52:00.002390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.342 [2024-11-20 14:52:00.002422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.342 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" records for tqpair=0x7fac28000b90 (addr=10.0.0.2, port=4420) repeated through 14:52:00.008479 ...]
00:32:48.343 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:48.343 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:48.343 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:48.343 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same connect()/sock-connection-error/qpair-failed record sequence continues repeating through 14:52:00.027712 ...]
00:32:48.345 [2024-11-20 14:52:00.028011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.345 [2024-11-20 14:52:00.028044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-11-20 14:52:00.028238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.345 [2024-11-20 14:52:00.028269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.345 [2024-11-20 14:52:00.028474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.345 [2024-11-20 14:52:00.028506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.345 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.028769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.028800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.029049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.029080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.029220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.029251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.029509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.029540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.029717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.029748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.029879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.029910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.030119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.030151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.030407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.030439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.030726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.030757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.031044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.031076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.031264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.031301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.031479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.031509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.031629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.031660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.031910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.031942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.032149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.032182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.032437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.032469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.032697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.032729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.032991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.033025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.033216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.033247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.033450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.033481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.033667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.033699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.033896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.033927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.034204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.034238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.034517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.034549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.034828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.034862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.035109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.035142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.035396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.035428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.035716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.035748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.035889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.035922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.036126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.036158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.036294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.036326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.036500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.036534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 
00:32:48.346 [2024-11-20 14:52:00.036784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.036817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.037105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.037139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.037380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.037414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.037611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.037642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.346 qpair failed and we were unable to recover it. 00:32:48.346 [2024-11-20 14:52:00.037923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.346 [2024-11-20 14:52:00.037963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 
00:32:48.347 [2024-11-20 14:52:00.038239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.038272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.038537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.038569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.038883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.038915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.039217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.039251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.039403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.039433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 
00:32:48.347 [2024-11-20 14:52:00.039617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.039649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.039891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.039923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.040154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.040187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.040433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.040465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.040728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.040760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 
00:32:48.347 [2024-11-20 14:52:00.041054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.041087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.041281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.041313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.041526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.041558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.041797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.041835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 00:32:48.347 [2024-11-20 14:52:00.042099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.347 [2024-11-20 14:52:00.042131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.347 qpair failed and we were unable to recover it. 
00:32:48.347 Malloc0
00:32:48.347 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:48.347 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:48.347 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:48.347 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.348 [2024-11-20 14:52:00.046794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... connect() failed (errno = 111) / sock connection error of tqpair=0x7fac28000b90 (addr=10.0.0.2, port=4420) / qpair failed messages continue interleaved with the lines above, timestamps 14:52:00.042 through 14:52:00.047 ...]
[... connect() failed (errno = 111) / sock connection error of tqpair=0x7fac28000b90 (addr=10.0.0.2, port=4420) / qpair failed messages repeat, timestamps 14:52:00.048 through 14:52:00.051 ...]
00:32:48.348 [2024-11-20 14:52:00.052057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.052091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.052309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.052340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.052570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.052602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.052863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.052894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.053048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.053081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 
00:32:48.348 [2024-11-20 14:52:00.053318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.053350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.053633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.053664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.053848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.053880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.054065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.054098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.054360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.054390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 
00:32:48.348 [2024-11-20 14:52:00.054589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.054621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.054748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.054778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.054943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.054985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.055142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.348 [2024-11-20 14:52:00.055180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.348 qpair failed and we were unable to recover it. 00:32:48.348 [2024-11-20 14:52:00.055396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.349 [2024-11-20 14:52:00.055427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420 00:32:48.349 qpair failed and we were unable to recover it. 
00:32:48.349 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:48.349 [2024-11-20 14:52:00.055612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.055643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.055831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:48.349 [2024-11-20 14:52:00.055863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.056010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.056042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:48.349 [2024-11-20 14:52:00.056182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.056215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.056409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.349 [2024-11-20 14:52:00.056566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.056598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fac28000b90 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.056747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.056780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.056931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.056944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.057042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.057053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.057192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.057204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.057292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.057302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.057465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.057477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.057626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.057638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.057805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.057817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.058016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.058028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.058250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.058262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.058546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.058558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.058778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.058789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.059007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.059020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.059168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.059179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.059350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.059362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.059552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.059563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.059712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.059724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.059887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.059898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.060084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.060097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.060240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.060252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.060496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.060508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.060723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.060734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.060827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.060838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.061006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.061018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.061213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.061225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.061314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.061324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.061480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.061492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.061636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.061648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.349 [2024-11-20 14:52:00.061781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.349 [2024-11-20 14:52:00.061793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.349 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.061986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.062001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.062215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.062228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.062452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.062464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.062548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.062560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.062786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.062799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.062938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.062956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.063189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.063204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:48.350 [2024-11-20 14:52:00.063421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.063435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.063653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.063666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:48.350 [2024-11-20 14:52:00.063809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.063821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.063975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.063988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:48.350 [2024-11-20 14:52:00.064136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.064149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:48.350 [2024-11-20 14:52:00.064359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.064373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.064453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.064465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.064608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.064621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.064844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.064858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.064938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.064955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.065055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.065067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.065341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.065354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.065504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.065517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.065669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.065682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.065910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.065922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.066082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.066096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.066228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.066241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.066462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.066475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.066636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.066649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.066847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.066860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.066999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.067012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.067240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.067253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.067481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.067494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.067689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.067702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.067920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.067933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.068084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.068096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.068263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.068276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.068431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.068444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.068641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.068661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.350 [2024-11-20 14:52:00.068853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.350 [2024-11-20 14:52:00.068866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.350 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.069018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.351 [2024-11-20 14:52:00.069032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.351 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.069163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.351 [2024-11-20 14:52:00.069175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.351 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.069395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.351 [2024-11-20 14:52:00.069410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.351 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.069662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.351 [2024-11-20 14:52:00.069676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.351 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.069761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.351 [2024-11-20 14:52:00.069773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.351 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.069994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.351 [2024-11-20 14:52:00.070008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420
00:32:48.351 qpair failed and we were unable to recover it.
00:32:48.351 [2024-11-20 14:52:00.070204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.070217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.070361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.070374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.070605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.070618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.070765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.070778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.070980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.070994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 
00:32:48.351 [2024-11-20 14:52:00.071140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.071153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.071259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.071273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.351 [2024-11-20 14:52:00.071422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.071435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.071530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.071543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 
00:32:48.351 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.351 [2024-11-20 14:52:00.071789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.071803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.071971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.071984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.351 [2024-11-20 14:52:00.072087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.072103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.072268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.072285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 
00:32:48.351 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.351 [2024-11-20 14:52:00.072542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.072559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.072698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.072715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.072950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.072968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.073152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.073168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.073337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.073354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 
00:32:48.351 [2024-11-20 14:52:00.073556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.073572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.073775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.073791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.073969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.073986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.074216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.074233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.074400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.074417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 
00:32:48.351 [2024-11-20 14:52:00.074598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.351 [2024-11-20 14:52:00.074614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.351 qpair failed and we were unable to recover it. 00:32:48.351 [2024-11-20 14:52:00.074821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.352 [2024-11-20 14:52:00.074838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59eba0 with addr=10.0.0.2, port=4420 00:32:48.352 qpair failed and we were unable to recover it. 00:32:48.352 [2024-11-20 14:52:00.075028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.352 [2024-11-20 14:52:00.077488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.077575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.077602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.077615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.077625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.077652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.352 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:48.352 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.352 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:48.352 [2024-11-20 14:52:00.087424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.087527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.087549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.087560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.087570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.352 [2024-11-20 14:52:00.087592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 14:52:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1755790 00:32:48.352 [2024-11-20 14:52:00.097383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.097449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.097468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.097476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.097482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.097497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.107385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.107445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.107460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.107467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.107473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.107488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.117288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.117346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.117361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.117368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.117373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.117388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.127333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.127390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.127405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.127412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.127418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.127433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.137410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.137468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.137482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.137489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.137499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.137514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.147440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.147515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.147529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.147536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.147542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.147556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.157496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.157552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.157567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.157573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.157579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.157593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.167511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.167562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.167577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.167584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.167590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.167604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.177530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.177584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.177598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.177604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.177611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.177625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.352 [2024-11-20 14:52:00.187554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.352 [2024-11-20 14:52:00.187611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.352 [2024-11-20 14:52:00.187626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.352 [2024-11-20 14:52:00.187632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.352 [2024-11-20 14:52:00.187638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.352 [2024-11-20 14:52:00.187652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.352 qpair failed and we were unable to recover it. 
00:32:48.353 [2024-11-20 14:52:00.197573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.353 [2024-11-20 14:52:00.197630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.353 [2024-11-20 14:52:00.197645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.353 [2024-11-20 14:52:00.197652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.353 [2024-11-20 14:52:00.197657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.353 [2024-11-20 14:52:00.197671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.353 qpair failed and we were unable to recover it. 
00:32:48.353 [2024-11-20 14:52:00.207607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.353 [2024-11-20 14:52:00.207663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.353 [2024-11-20 14:52:00.207677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.353 [2024-11-20 14:52:00.207684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.353 [2024-11-20 14:52:00.207690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.353 [2024-11-20 14:52:00.207704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.353 qpair failed and we were unable to recover it. 
00:32:48.353 [2024-11-20 14:52:00.217638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.353 [2024-11-20 14:52:00.217705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.353 [2024-11-20 14:52:00.217720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.353 [2024-11-20 14:52:00.217727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.353 [2024-11-20 14:52:00.217733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.353 [2024-11-20 14:52:00.217746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.353 qpair failed and we were unable to recover it. 
00:32:48.353 [2024-11-20 14:52:00.227705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.353 [2024-11-20 14:52:00.227809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.353 [2024-11-20 14:52:00.227827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.353 [2024-11-20 14:52:00.227833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.353 [2024-11-20 14:52:00.227839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.353 [2024-11-20 14:52:00.227854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.353 qpair failed and we were unable to recover it. 
00:32:48.353 [2024-11-20 14:52:00.237694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.353 [2024-11-20 14:52:00.237752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.353 [2024-11-20 14:52:00.237766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.353 [2024-11-20 14:52:00.237774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.353 [2024-11-20 14:52:00.237779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.353 [2024-11-20 14:52:00.237794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.353 qpair failed and we were unable to recover it.
00:32:48.353 [2024-11-20 14:52:00.247716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.353 [2024-11-20 14:52:00.247768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.353 [2024-11-20 14:52:00.247782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.353 [2024-11-20 14:52:00.247789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.353 [2024-11-20 14:52:00.247795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.353 [2024-11-20 14:52:00.247809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.353 qpair failed and we were unable to recover it.
00:32:48.353 [2024-11-20 14:52:00.257745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.353 [2024-11-20 14:52:00.257802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.353 [2024-11-20 14:52:00.257817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.353 [2024-11-20 14:52:00.257824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.353 [2024-11-20 14:52:00.257830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.353 [2024-11-20 14:52:00.257844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.353 qpair failed and we were unable to recover it.
00:32:48.353 [2024-11-20 14:52:00.267787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.353 [2024-11-20 14:52:00.267856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.353 [2024-11-20 14:52:00.267871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.353 [2024-11-20 14:52:00.267878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.353 [2024-11-20 14:52:00.267889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.353 [2024-11-20 14:52:00.267904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.353 qpair failed and we were unable to recover it.
00:32:48.353 [2024-11-20 14:52:00.277817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.353 [2024-11-20 14:52:00.277922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.353 [2024-11-20 14:52:00.277937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.353 [2024-11-20 14:52:00.277944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.353 [2024-11-20 14:52:00.277955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.353 [2024-11-20 14:52:00.277970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.353 qpair failed and we were unable to recover it.
00:32:48.614 [2024-11-20 14:52:00.287829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.614 [2024-11-20 14:52:00.287887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.614 [2024-11-20 14:52:00.287904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.614 [2024-11-20 14:52:00.287911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.614 [2024-11-20 14:52:00.287917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.614 [2024-11-20 14:52:00.287932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.614 qpair failed and we were unable to recover it.
00:32:48.614 [2024-11-20 14:52:00.297863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.614 [2024-11-20 14:52:00.297918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.297933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.297940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.297946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.297965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.307906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.307986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.308001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.308008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.308014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.308031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.317944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.318007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.318023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.318030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.318036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.318051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.327960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.328014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.328029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.328035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.328042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.328057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.337999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.338054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.338068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.338075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.338081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.338096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.348025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.348083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.348097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.348104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.348110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.348124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.358093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.358157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.358175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.358182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.358188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.358202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.368118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.368228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.368242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.368249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.368255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.368270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.378088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.378144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.378158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.378165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.378171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.378186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.388136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.388196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.388210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.388217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.388223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.388237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.398152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.398210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.398224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.398231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.398240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.398255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.408174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.408235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.408249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.408256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.408261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.408276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.418203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.615 [2024-11-20 14:52:00.418256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.615 [2024-11-20 14:52:00.418271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.615 [2024-11-20 14:52:00.418277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.615 [2024-11-20 14:52:00.418284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.615 [2024-11-20 14:52:00.418299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.615 qpair failed and we were unable to recover it.
00:32:48.615 [2024-11-20 14:52:00.428277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.428385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.428399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.428405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.428411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.428425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.438300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.438357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.438371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.438378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.438384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.438398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.448305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.448362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.448376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.448383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.448389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.448404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.458324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.458376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.458390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.458397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.458403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.458417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.468411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.468516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.468531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.468539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.468544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.468559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.478389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.478451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.478466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.478472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.478478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.478493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.488413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.488461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.488479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.488485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.488491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.488506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.498446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.498521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.498536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.498542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.498548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.498562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.508467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.508532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.508546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.508553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.508559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.508572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.518492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.518549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.518563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.518570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.518576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.518589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.528515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.528572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.528587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.528593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.528603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.616 [2024-11-20 14:52:00.528617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.616 qpair failed and we were unable to recover it.
00:32:48.616 [2024-11-20 14:52:00.538548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.616 [2024-11-20 14:52:00.538602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.616 [2024-11-20 14:52:00.538617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.616 [2024-11-20 14:52:00.538624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.616 [2024-11-20 14:52:00.538630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.617 [2024-11-20 14:52:00.538644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.617 qpair failed and we were unable to recover it.
00:32:48.617 [2024-11-20 14:52:00.548600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.617 [2024-11-20 14:52:00.548657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.617 [2024-11-20 14:52:00.548671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.617 [2024-11-20 14:52:00.548678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.617 [2024-11-20 14:52:00.548684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.617 [2024-11-20 14:52:00.548698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.617 qpair failed and we were unable to recover it.
00:32:48.617 [2024-11-20 14:52:00.558628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.617 [2024-11-20 14:52:00.558684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.617 [2024-11-20 14:52:00.558698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.617 [2024-11-20 14:52:00.558704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.617 [2024-11-20 14:52:00.558710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.617 [2024-11-20 14:52:00.558724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.617 qpair failed and we were unable to recover it.
00:32:48.617 [2024-11-20 14:52:00.568586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.617 [2024-11-20 14:52:00.568675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.617 [2024-11-20 14:52:00.568691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.617 [2024-11-20 14:52:00.568698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.617 [2024-11-20 14:52:00.568704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.617 [2024-11-20 14:52:00.568721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.617 qpair failed and we were unable to recover it.
00:32:48.878 [2024-11-20 14:52:00.578651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:48.878 [2024-11-20 14:52:00.578707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:48.878 [2024-11-20 14:52:00.578724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:48.878 [2024-11-20 14:52:00.578730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:48.878 [2024-11-20 14:52:00.578736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:48.878 [2024-11-20 14:52:00.578752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:48.878 qpair failed and we were unable to recover it.
00:32:48.878 [2024-11-20 14:52:00.588703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.878 [2024-11-20 14:52:00.588774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.878 [2024-11-20 14:52:00.588788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.878 [2024-11-20 14:52:00.588795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.878 [2024-11-20 14:52:00.588801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.878 [2024-11-20 14:52:00.588816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.878 qpair failed and we were unable to recover it. 
00:32:48.878 [2024-11-20 14:52:00.598725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.878 [2024-11-20 14:52:00.598801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.878 [2024-11-20 14:52:00.598816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.878 [2024-11-20 14:52:00.598822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.878 [2024-11-20 14:52:00.598829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.878 [2024-11-20 14:52:00.598843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.878 qpair failed and we were unable to recover it. 
00:32:48.878 [2024-11-20 14:52:00.608735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.878 [2024-11-20 14:52:00.608831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.878 [2024-11-20 14:52:00.608845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.878 [2024-11-20 14:52:00.608851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.878 [2024-11-20 14:52:00.608857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.878 [2024-11-20 14:52:00.608872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.878 qpair failed and we were unable to recover it. 
00:32:48.878 [2024-11-20 14:52:00.618770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.878 [2024-11-20 14:52:00.618828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.878 [2024-11-20 14:52:00.618846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.878 [2024-11-20 14:52:00.618853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.878 [2024-11-20 14:52:00.618859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.878 [2024-11-20 14:52:00.618874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.878 qpair failed and we were unable to recover it. 
00:32:48.878 [2024-11-20 14:52:00.628806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.628861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.628875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.628882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.628888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.628903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.638842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.638895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.638909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.638916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.638922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.638937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.648859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.648918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.648933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.648940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.648946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.648966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.658875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.658926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.658940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.658951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.658960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.658975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.668913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.668976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.668991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.668999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.669005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.669019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.678996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.679054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.679068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.679074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.679080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.679094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.688907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.688965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.688980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.688987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.688992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.689007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.698930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.698988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.699004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.699010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.699017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.699031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.708969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.709024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.709040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.709046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.709052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.709067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.719030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.719125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.719139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.719146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.719152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.719166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.729031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.729087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.729101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.729107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.729113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.879 [2024-11-20 14:52:00.729127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.879 qpair failed and we were unable to recover it. 
00:32:48.879 [2024-11-20 14:52:00.739109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.879 [2024-11-20 14:52:00.739159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.879 [2024-11-20 14:52:00.739173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.879 [2024-11-20 14:52:00.739180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.879 [2024-11-20 14:52:00.739186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.739200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.749137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.749207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.749224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.749232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.749238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.749252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.759203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.759255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.759270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.759277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.759284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.759298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.769161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.769215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.769230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.769237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.769243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.769258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.779183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.779270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.779284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.779290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.779296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.779310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.789264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.789324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.789338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.789345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.789354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.789368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.799219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.799286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.799300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.799307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.799313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.799326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.809335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.809391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.809405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.809412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.809418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.809432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.819366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.819418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.819433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.819439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.819445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.819460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:48.880 [2024-11-20 14:52:00.829415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:48.880 [2024-11-20 14:52:00.829473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:48.880 [2024-11-20 14:52:00.829488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:48.880 [2024-11-20 14:52:00.829495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:48.880 [2024-11-20 14:52:00.829501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:48.880 [2024-11-20 14:52:00.829516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:48.880 qpair failed and we were unable to recover it. 
00:32:49.141 [2024-11-20 14:52:00.839431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.141 [2024-11-20 14:52:00.839483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.141 [2024-11-20 14:52:00.839499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.141 [2024-11-20 14:52:00.839506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.141 [2024-11-20 14:52:00.839512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.141 [2024-11-20 14:52:00.839527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.141 qpair failed and we were unable to recover it. 
00:32:49.141 [2024-11-20 14:52:00.849363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.141 [2024-11-20 14:52:00.849414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.141 [2024-11-20 14:52:00.849431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.141 [2024-11-20 14:52:00.849439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.141 [2024-11-20 14:52:00.849445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.141 [2024-11-20 14:52:00.849462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.141 qpair failed and we were unable to recover it. 
00:32:49.141 [2024-11-20 14:52:00.859377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.141 [2024-11-20 14:52:00.859433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.141 [2024-11-20 14:52:00.859448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.141 [2024-11-20 14:52:00.859455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.141 [2024-11-20 14:52:00.859461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.141 [2024-11-20 14:52:00.859476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.141 qpair failed and we were unable to recover it.
00:32:49.141 [2024-11-20 14:52:00.869538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.141 [2024-11-20 14:52:00.869595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.141 [2024-11-20 14:52:00.869610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.141 [2024-11-20 14:52:00.869616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.141 [2024-11-20 14:52:00.869622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.141 [2024-11-20 14:52:00.869637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.141 qpair failed and we were unable to recover it.
00:32:49.141 [2024-11-20 14:52:00.879579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.141 [2024-11-20 14:52:00.879640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.141 [2024-11-20 14:52:00.879657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.141 [2024-11-20 14:52:00.879664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.141 [2024-11-20 14:52:00.879670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.141 [2024-11-20 14:52:00.879684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.141 qpair failed and we were unable to recover it.
00:32:49.141 [2024-11-20 14:52:00.889518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.141 [2024-11-20 14:52:00.889577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.141 [2024-11-20 14:52:00.889592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.141 [2024-11-20 14:52:00.889599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.141 [2024-11-20 14:52:00.889605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.141 [2024-11-20 14:52:00.889619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.141 qpair failed and we were unable to recover it.
00:32:49.141 [2024-11-20 14:52:00.899608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.141 [2024-11-20 14:52:00.899663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.141 [2024-11-20 14:52:00.899677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.141 [2024-11-20 14:52:00.899684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.141 [2024-11-20 14:52:00.899691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.141 [2024-11-20 14:52:00.899705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.909612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.909682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.909696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.909703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.909709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.909724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.919629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.919680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.919695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.919701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.919713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.919728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.929678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.929726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.929740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.929746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.929752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.929766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.939609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.939662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.939676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.939683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.939689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.939704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.949650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.949708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.949724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.949731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.949737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.949752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.959716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.959771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.959785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.959792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.959798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.959813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.969699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.969751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.969766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.969773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.969779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.969794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.979724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.979779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.979794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.979800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.979807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.979821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.989831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.989889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.989903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.989910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.989916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.989930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:00.999840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:00.999918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:00.999932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:00.999939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:00.999945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:00.999964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:01.009890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:01.009944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:01.009965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:01.009972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:01.009978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:01.009992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:01.019908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:01.019972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:01.019987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:01.019993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:01.019999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:01.020014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.142 [2024-11-20 14:52:01.029877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.142 [2024-11-20 14:52:01.029952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.142 [2024-11-20 14:52:01.029966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.142 [2024-11-20 14:52:01.029973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.142 [2024-11-20 14:52:01.029979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.142 [2024-11-20 14:52:01.029994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.142 qpair failed and we were unable to recover it.
00:32:49.143 [2024-11-20 14:52:01.039963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.143 [2024-11-20 14:52:01.040020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.143 [2024-11-20 14:52:01.040034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.143 [2024-11-20 14:52:01.040040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.143 [2024-11-20 14:52:01.040046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.143 [2024-11-20 14:52:01.040061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.143 qpair failed and we were unable to recover it.
00:32:49.143 [2024-11-20 14:52:01.049984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.143 [2024-11-20 14:52:01.050041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.143 [2024-11-20 14:52:01.050055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.143 [2024-11-20 14:52:01.050062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.143 [2024-11-20 14:52:01.050070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.143 [2024-11-20 14:52:01.050086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.143 qpair failed and we were unable to recover it.
00:32:49.143 [2024-11-20 14:52:01.060007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.143 [2024-11-20 14:52:01.060084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.143 [2024-11-20 14:52:01.060098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.143 [2024-11-20 14:52:01.060105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.143 [2024-11-20 14:52:01.060111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.143 [2024-11-20 14:52:01.060125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.143 qpair failed and we were unable to recover it.
00:32:49.143 [2024-11-20 14:52:01.070058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.143 [2024-11-20 14:52:01.070132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.143 [2024-11-20 14:52:01.070147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.143 [2024-11-20 14:52:01.070154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.143 [2024-11-20 14:52:01.070159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.143 [2024-11-20 14:52:01.070175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.143 qpair failed and we were unable to recover it.
00:32:49.143 [2024-11-20 14:52:01.080042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.143 [2024-11-20 14:52:01.080132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.143 [2024-11-20 14:52:01.080145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.143 [2024-11-20 14:52:01.080152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.143 [2024-11-20 14:52:01.080158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.143 [2024-11-20 14:52:01.080172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.143 qpair failed and we were unable to recover it.
00:32:49.143 [2024-11-20 14:52:01.090110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.143 [2024-11-20 14:52:01.090163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.143 [2024-11-20 14:52:01.090177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.143 [2024-11-20 14:52:01.090184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.143 [2024-11-20 14:52:01.090190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.143 [2024-11-20 14:52:01.090204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.143 qpair failed and we were unable to recover it.
00:32:49.404 [2024-11-20 14:52:01.100169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.404 [2024-11-20 14:52:01.100227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.404 [2024-11-20 14:52:01.100243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.404 [2024-11-20 14:52:01.100250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.404 [2024-11-20 14:52:01.100255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.404 [2024-11-20 14:52:01.100271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.404 qpair failed and we were unable to recover it.
00:32:49.404 [2024-11-20 14:52:01.110176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.404 [2024-11-20 14:52:01.110245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.404 [2024-11-20 14:52:01.110260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.404 [2024-11-20 14:52:01.110267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.404 [2024-11-20 14:52:01.110273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.404 [2024-11-20 14:52:01.110288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.404 qpair failed and we were unable to recover it.
00:32:49.404 [2024-11-20 14:52:01.120212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.404 [2024-11-20 14:52:01.120268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.404 [2024-11-20 14:52:01.120283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.404 [2024-11-20 14:52:01.120290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.404 [2024-11-20 14:52:01.120296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.404 [2024-11-20 14:52:01.120310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.404 qpair failed and we were unable to recover it.
00:32:49.404 [2024-11-20 14:52:01.130227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.404 [2024-11-20 14:52:01.130283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.404 [2024-11-20 14:52:01.130297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.404 [2024-11-20 14:52:01.130304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.404 [2024-11-20 14:52:01.130310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.404 [2024-11-20 14:52:01.130324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.404 qpair failed and we were unable to recover it.
00:32:49.404 [2024-11-20 14:52:01.140202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.404 [2024-11-20 14:52:01.140257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.404 [2024-11-20 14:52:01.140276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.404 [2024-11-20 14:52:01.140283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.404 [2024-11-20 14:52:01.140288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.404 [2024-11-20 14:52:01.140303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.404 qpair failed and we were unable to recover it.
00:32:49.404 [2024-11-20 14:52:01.150326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.404 [2024-11-20 14:52:01.150385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.404 [2024-11-20 14:52:01.150399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.404 [2024-11-20 14:52:01.150406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.404 [2024-11-20 14:52:01.150412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.404 [2024-11-20 14:52:01.150426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.405 qpair failed and we were unable to recover it.
00:32:49.405 [2024-11-20 14:52:01.160311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.405 [2024-11-20 14:52:01.160368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.405 [2024-11-20 14:52:01.160382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.405 [2024-11-20 14:52:01.160388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.405 [2024-11-20 14:52:01.160395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.405 [2024-11-20 14:52:01.160410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.405 qpair failed and we were unable to recover it.
00:32:49.405 [2024-11-20 14:52:01.170343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.405 [2024-11-20 14:52:01.170393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.405 [2024-11-20 14:52:01.170408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.405 [2024-11-20 14:52:01.170414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.405 [2024-11-20 14:52:01.170420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.405 [2024-11-20 14:52:01.170435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.405 qpair failed and we were unable to recover it.
00:32:49.405 [2024-11-20 14:52:01.180284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.405 [2024-11-20 14:52:01.180339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.405 [2024-11-20 14:52:01.180354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.405 [2024-11-20 14:52:01.180360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.405 [2024-11-20 14:52:01.180369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.405 [2024-11-20 14:52:01.180384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.405 qpair failed and we were unable to recover it.
00:32:49.405 [2024-11-20 14:52:01.190421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.405 [2024-11-20 14:52:01.190477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.405 [2024-11-20 14:52:01.190491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.405 [2024-11-20 14:52:01.190497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.405 [2024-11-20 14:52:01.190503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.405 [2024-11-20 14:52:01.190518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.405 qpair failed and we were unable to recover it.
00:32:49.405 [2024-11-20 14:52:01.200454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.405 [2024-11-20 14:52:01.200509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.405 [2024-11-20 14:52:01.200523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.405 [2024-11-20 14:52:01.200530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.405 [2024-11-20 14:52:01.200536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.405 [2024-11-20 14:52:01.200550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.405 qpair failed and we were unable to recover it.
00:32:49.405 [2024-11-20 14:52:01.210447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.210501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.210515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.210522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.210528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.210542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.220526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.220582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.220596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.220603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.220609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.220624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.230509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.230565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.230579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.230586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.230592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.230606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.240533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.240588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.240601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.240608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.240614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.240627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.250559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.250613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.250626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.250633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.250639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.250653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.260519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.260576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.260589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.260596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.260602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.260616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.270635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.270694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.270713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.270719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.270725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.270740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.280648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.405 [2024-11-20 14:52:01.280705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.405 [2024-11-20 14:52:01.280720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.405 [2024-11-20 14:52:01.280727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.405 [2024-11-20 14:52:01.280732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.405 [2024-11-20 14:52:01.280747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.405 qpair failed and we were unable to recover it. 
00:32:49.405 [2024-11-20 14:52:01.290675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.290734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.290748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.290755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.290761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.290776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.406 [2024-11-20 14:52:01.300624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.300675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.300689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.300695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.300702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.300716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.406 [2024-11-20 14:52:01.310689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.310777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.310791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.310797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.310806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.310821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.406 [2024-11-20 14:52:01.320778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.320835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.320849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.320856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.320862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.320876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.406 [2024-11-20 14:52:01.330783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.330841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.330855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.330862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.330868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.330882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.406 [2024-11-20 14:52:01.340811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.340869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.340883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.340890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.340895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.340909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.406 [2024-11-20 14:52:01.350848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.406 [2024-11-20 14:52:01.350908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.406 [2024-11-20 14:52:01.350922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.406 [2024-11-20 14:52:01.350929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.406 [2024-11-20 14:52:01.350934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.406 [2024-11-20 14:52:01.350951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.406 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.360874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.360930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.360945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.360956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.360962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.360977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.370895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.370968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.370984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.370992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.370998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.371013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.380917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.380977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.380992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.380999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.381005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.381020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.390964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.391033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.391046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.391053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.391058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.391073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.400996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.401055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.401072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.401079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.401085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.401099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.411002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.411058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.411073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.411079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.411085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.411100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.421029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.421086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.421100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.421107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.421113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.421126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.431005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.431061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.431075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.431082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.431088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.431103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.441108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.441183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.441197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.441204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.441213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.441227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.451129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.451180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.451195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.451201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.451207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.451223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.461132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.461187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.461201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.461207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.461213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.667 [2024-11-20 14:52:01.461228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.667 qpair failed and we were unable to recover it. 
00:32:49.667 [2024-11-20 14:52:01.471188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.667 [2024-11-20 14:52:01.471252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.667 [2024-11-20 14:52:01.471267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.667 [2024-11-20 14:52:01.471273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.667 [2024-11-20 14:52:01.471279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.668 [2024-11-20 14:52:01.471294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.668 qpair failed and we were unable to recover it. 
00:32:49.668 [2024-11-20 14:52:01.481216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.668 [2024-11-20 14:52:01.481273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.668 [2024-11-20 14:52:01.481287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.668 [2024-11-20 14:52:01.481294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.668 [2024-11-20 14:52:01.481300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.668 [2024-11-20 14:52:01.481314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.668 qpair failed and we were unable to recover it. 
00:32:49.668 [2024-11-20 14:52:01.491255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.668 [2024-11-20 14:52:01.491308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.668 [2024-11-20 14:52:01.491322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.668 [2024-11-20 14:52:01.491329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.668 [2024-11-20 14:52:01.491334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.668 [2024-11-20 14:52:01.491349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.668 qpair failed and we were unable to recover it. 
00:32:49.668 [2024-11-20 14:52:01.501199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.501253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.501267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.501273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.501279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.501294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.511255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.511342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.511356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.511363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.511370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.511383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.521339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.521409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.521424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.521430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.521436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.521451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.531347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.531402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.531419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.531426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.531431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.531446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.541376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.541426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.541440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.541446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.541452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.541466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.551412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.551466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.551480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.551487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.551493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.551506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.561456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.561508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.561522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.561529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.561535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.561548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.571462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.571524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.571539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.571545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.571556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.571571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.581493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.581544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.581558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.581564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.581570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.581584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.591517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.591571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.591585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.591592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.591598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.591613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.668 qpair failed and we were unable to recover it.
00:32:49.668 [2024-11-20 14:52:01.601567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.668 [2024-11-20 14:52:01.601623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.668 [2024-11-20 14:52:01.601637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.668 [2024-11-20 14:52:01.601645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.668 [2024-11-20 14:52:01.601651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.668 [2024-11-20 14:52:01.601665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.669 qpair failed and we were unable to recover it.
00:32:49.669 [2024-11-20 14:52:01.611602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.669 [2024-11-20 14:52:01.611684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.669 [2024-11-20 14:52:01.611698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.669 [2024-11-20 14:52:01.611705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.669 [2024-11-20 14:52:01.611710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.669 [2024-11-20 14:52:01.611724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.669 qpair failed and we were unable to recover it.
00:32:49.669 [2024-11-20 14:52:01.621617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.669 [2024-11-20 14:52:01.621697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.669 [2024-11-20 14:52:01.621713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.669 [2024-11-20 14:52:01.621721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.669 [2024-11-20 14:52:01.621727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.669 [2024-11-20 14:52:01.621742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.669 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.631659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.631716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.631732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.631739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.631745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.631761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.641713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.641769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.641783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.641790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.641796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.641811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.651712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.651762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.651777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.651784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.651790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.651804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.661733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.661785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.661802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.661809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.661815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.661829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.671771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.671830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.671845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.671852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.671858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.671873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.681812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.681869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.681883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.681890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.681896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.681910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.691811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.691868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.691883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.691889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.691895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.691909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.701837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.701892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.701906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.701914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.701923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.701939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.711993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.712049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.712065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.712072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.712078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.712093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.721881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.721939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.929 [2024-11-20 14:52:01.721958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.929 [2024-11-20 14:52:01.721965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.929 [2024-11-20 14:52:01.721971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.929 [2024-11-20 14:52:01.721986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.929 qpair failed and we were unable to recover it.
00:32:49.929 [2024-11-20 14:52:01.731935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.929 [2024-11-20 14:52:01.731999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.732015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.732021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.732027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.732042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.741963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.742051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.742066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.742073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.742078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.742093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.752009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.752065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.752079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.752086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.752092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.752106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.762031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.762082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.762096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.762103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.762109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.762123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.772095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.772163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.772178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.772185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.772191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.772206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.782081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.782139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.782153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.782160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.782165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.782180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.792178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.792236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.792254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.792261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.792267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.792282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.802161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:49.930 [2024-11-20 14:52:01.802234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:49.930 [2024-11-20 14:52:01.802248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:49.930 [2024-11-20 14:52:01.802255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:49.930 [2024-11-20 14:52:01.802261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:49.930 [2024-11-20 14:52:01.802276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:49.930 qpair failed and we were unable to recover it.
00:32:49.930 [2024-11-20 14:52:01.812173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.930 [2024-11-20 14:52:01.812230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.930 [2024-11-20 14:52:01.812244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.930 [2024-11-20 14:52:01.812251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.930 [2024-11-20 14:52:01.812257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.930 [2024-11-20 14:52:01.812271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.930 qpair failed and we were unable to recover it. 
00:32:49.930 [2024-11-20 14:52:01.822183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.930 [2024-11-20 14:52:01.822280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.930 [2024-11-20 14:52:01.822295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.930 [2024-11-20 14:52:01.822301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.930 [2024-11-20 14:52:01.822307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.930 [2024-11-20 14:52:01.822322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.930 qpair failed and we were unable to recover it. 
00:32:49.930 [2024-11-20 14:52:01.832229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.930 [2024-11-20 14:52:01.832288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.930 [2024-11-20 14:52:01.832302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.930 [2024-11-20 14:52:01.832310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.930 [2024-11-20 14:52:01.832319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.930 [2024-11-20 14:52:01.832334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.930 qpair failed and we were unable to recover it. 
00:32:49.930 [2024-11-20 14:52:01.842282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.930 [2024-11-20 14:52:01.842343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.930 [2024-11-20 14:52:01.842357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.930 [2024-11-20 14:52:01.842364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.930 [2024-11-20 14:52:01.842370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.930 [2024-11-20 14:52:01.842385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.930 qpair failed and we were unable to recover it. 
00:32:49.930 [2024-11-20 14:52:01.852273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.930 [2024-11-20 14:52:01.852330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.930 [2024-11-20 14:52:01.852347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.930 [2024-11-20 14:52:01.852354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.930 [2024-11-20 14:52:01.852360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.930 [2024-11-20 14:52:01.852376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.930 qpair failed and we were unable to recover it. 
00:32:49.930 [2024-11-20 14:52:01.862284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.930 [2024-11-20 14:52:01.862364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.931 [2024-11-20 14:52:01.862379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.931 [2024-11-20 14:52:01.862385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.931 [2024-11-20 14:52:01.862391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.931 [2024-11-20 14:52:01.862406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.931 qpair failed and we were unable to recover it. 
00:32:49.931 [2024-11-20 14:52:01.872391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.931 [2024-11-20 14:52:01.872492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.931 [2024-11-20 14:52:01.872507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.931 [2024-11-20 14:52:01.872514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.931 [2024-11-20 14:52:01.872520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.931 [2024-11-20 14:52:01.872535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.931 qpair failed and we were unable to recover it. 
00:32:49.931 [2024-11-20 14:52:01.882358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:49.931 [2024-11-20 14:52:01.882414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:49.931 [2024-11-20 14:52:01.882429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:49.931 [2024-11-20 14:52:01.882436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:49.931 [2024-11-20 14:52:01.882442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:49.931 [2024-11-20 14:52:01.882457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:49.931 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.892412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.892517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.892532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.892539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.892545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.892561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.902417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.902498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.902513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.902520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.902525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.902540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.912436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.912493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.912508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.912515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.912523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.912539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.922496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.922551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.922569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.922576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.922582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.922597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.932438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.932493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.932507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.932514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.932520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.932535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.942516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.942567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.942581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.942588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.942594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.942608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.952559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.952615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.952630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.952637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.952643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.952657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.962607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.962681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.962696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.962703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.962712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.962727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.972610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.972667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.972683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.972690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.972695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.972711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.982651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.982736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.982751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.982757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.982764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.982778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:01.992680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:01.992734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:01.992748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:01.992755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:01.992761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:01.992775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:02.002702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:02.002757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:02.002771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:02.002778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:02.002784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:02.002798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:02.012738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:02.012798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:02.012812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:02.012819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:02.012825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:02.012839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:02.022750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:02.022803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:02.022817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:02.022824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:02.022830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:02.022844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:02.032799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:02.032853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.191 [2024-11-20 14:52:02.032868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.191 [2024-11-20 14:52:02.032874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.191 [2024-11-20 14:52:02.032880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.191 [2024-11-20 14:52:02.032895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.191 qpair failed and we were unable to recover it. 
00:32:50.191 [2024-11-20 14:52:02.042830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.191 [2024-11-20 14:52:02.042887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-11-20 14:52:02.042901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-11-20 14:52:02.042908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-11-20 14:52:02.042914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.192 [2024-11-20 14:52:02.042928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-11-20 14:52:02.052868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-11-20 14:52:02.052922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-11-20 14:52:02.052940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-11-20 14:52:02.052950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-11-20 14:52:02.052956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.192 [2024-11-20 14:52:02.052971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-11-20 14:52:02.062819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-11-20 14:52:02.062870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-11-20 14:52:02.062884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-11-20 14:52:02.062891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-11-20 14:52:02.062897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.192 [2024-11-20 14:52:02.062912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-11-20 14:52:02.072946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-11-20 14:52:02.073009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-11-20 14:52:02.073024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-11-20 14:52:02.073031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-11-20 14:52:02.073037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.192 [2024-11-20 14:52:02.073051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-11-20 14:52:02.082957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-11-20 14:52:02.083010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-11-20 14:52:02.083024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-11-20 14:52:02.083030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-11-20 14:52:02.083037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.192 [2024-11-20 14:52:02.083051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-11-20 14:52:02.092999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.192 [2024-11-20 14:52:02.093057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.192 [2024-11-20 14:52:02.093073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.192 [2024-11-20 14:52:02.093079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.192 [2024-11-20 14:52:02.093089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.192 [2024-11-20 14:52:02.093104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.192 qpair failed and we were unable to recover it. 
00:32:50.192 [2024-11-20 14:52:02.102990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.192 [2024-11-20 14:52:02.103045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.192 [2024-11-20 14:52:02.103059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.192 [2024-11-20 14:52:02.103066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.192 [2024-11-20 14:52:02.103072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.192 [2024-11-20 14:52:02.103086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.192 qpair failed and we were unable to recover it.
00:32:50.192 [2024-11-20 14:52:02.113046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.192 [2024-11-20 14:52:02.113108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.192 [2024-11-20 14:52:02.113122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.192 [2024-11-20 14:52:02.113129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.192 [2024-11-20 14:52:02.113135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.192 [2024-11-20 14:52:02.113149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.192 qpair failed and we were unable to recover it.
00:32:50.192 [2024-11-20 14:52:02.123111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.192 [2024-11-20 14:52:02.123221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.192 [2024-11-20 14:52:02.123235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.192 [2024-11-20 14:52:02.123241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.192 [2024-11-20 14:52:02.123247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.192 [2024-11-20 14:52:02.123261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.192 qpair failed and we were unable to recover it.
00:32:50.192 [2024-11-20 14:52:02.133068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.192 [2024-11-20 14:52:02.133126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.192 [2024-11-20 14:52:02.133140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.192 [2024-11-20 14:52:02.133147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.192 [2024-11-20 14:52:02.133152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.192 [2024-11-20 14:52:02.133166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.192 qpair failed and we were unable to recover it.
00:32:50.192 [2024-11-20 14:52:02.143088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.192 [2024-11-20 14:52:02.143192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.192 [2024-11-20 14:52:02.143209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.192 [2024-11-20 14:52:02.143215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.192 [2024-11-20 14:52:02.143221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.192 [2024-11-20 14:52:02.143237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.192 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.153188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.153242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.153258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.153265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.153271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.153286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.163113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.163166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.163181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.163188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.163194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.163209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.173242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.173295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.173310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.173317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.173323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.173337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.183178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.183230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.183247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.183254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.183260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.183275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.193256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.193311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.193326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.193333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.193339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.193353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.203289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.203373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.203389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.203396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.203403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.203419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.213347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.213401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.213416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.213423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.213428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.213444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.223275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.223331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.453 [2024-11-20 14:52:02.223346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.453 [2024-11-20 14:52:02.223353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.453 [2024-11-20 14:52:02.223365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.453 [2024-11-20 14:52:02.223380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.453 qpair failed and we were unable to recover it.
00:32:50.453 [2024-11-20 14:52:02.233362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.453 [2024-11-20 14:52:02.233441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.233455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.233462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.233468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.233482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.243352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.243406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.243421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.243427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.243433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.243448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.253349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.253403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.253418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.253424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.253430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.253444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.263409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.263509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.263523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.263529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.263535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.263549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.273483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.273557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.273571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.273578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.273584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.273599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.283500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.283558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.283572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.283579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.283584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.283598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.293459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.293512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.293526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.293532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.293538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.293553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.303553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.303624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.303639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.303645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.303651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.303665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.313620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.313718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.313735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.313742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.313747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.313761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.323586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.323651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.323665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.323671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.323677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.323691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.333629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.333689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.333703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.333710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.333715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.333729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.343615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.343666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.343679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.343685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.343691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.343706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.353710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.353781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.353795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.454 [2024-11-20 14:52:02.353801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.454 [2024-11-20 14:52:02.353811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.454 [2024-11-20 14:52:02.353825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.454 qpair failed and we were unable to recover it.
00:32:50.454 [2024-11-20 14:52:02.363773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.454 [2024-11-20 14:52:02.363831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.454 [2024-11-20 14:52:02.363845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.455 [2024-11-20 14:52:02.363852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.455 [2024-11-20 14:52:02.363857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.455 [2024-11-20 14:52:02.363872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.455 qpair failed and we were unable to recover it.
00:32:50.455 [2024-11-20 14:52:02.373757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.455 [2024-11-20 14:52:02.373813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.455 [2024-11-20 14:52:02.373828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.455 [2024-11-20 14:52:02.373835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.455 [2024-11-20 14:52:02.373841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.455 [2024-11-20 14:52:02.373856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.455 qpair failed and we were unable to recover it.
00:32:50.455 [2024-11-20 14:52:02.383777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.455 [2024-11-20 14:52:02.383851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.455 [2024-11-20 14:52:02.383866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.455 [2024-11-20 14:52:02.383872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.455 [2024-11-20 14:52:02.383878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.455 [2024-11-20 14:52:02.383892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.455 qpair failed and we were unable to recover it.
00:32:50.455 [2024-11-20 14:52:02.393816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.455 [2024-11-20 14:52:02.393875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.455 [2024-11-20 14:52:02.393889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.455 [2024-11-20 14:52:02.393896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.455 [2024-11-20 14:52:02.393902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.455 [2024-11-20 14:52:02.393916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.455 qpair failed and we were unable to recover it.
00:32:50.455 [2024-11-20 14:52:02.403862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.455 [2024-11-20 14:52:02.403927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.455 [2024-11-20 14:52:02.403942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.455 [2024-11-20 14:52:02.403953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.455 [2024-11-20 14:52:02.403959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.455 [2024-11-20 14:52:02.403974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.455 qpair failed and we were unable to recover it.
00:32:50.716 [2024-11-20 14:52:02.413871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.716 [2024-11-20 14:52:02.413966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.716 [2024-11-20 14:52:02.413982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.716 [2024-11-20 14:52:02.413989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.716 [2024-11-20 14:52:02.413995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.716 [2024-11-20 14:52:02.414009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.716 qpair failed and we were unable to recover it.
00:32:50.716 [2024-11-20 14:52:02.423945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.716 [2024-11-20 14:52:02.424005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.716 [2024-11-20 14:52:02.424022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.716 [2024-11-20 14:52:02.424029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.716 [2024-11-20 14:52:02.424035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.716 [2024-11-20 14:52:02.424050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.716 qpair failed and we were unable to recover it.
00:32:50.716 [2024-11-20 14:52:02.433898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.716 [2024-11-20 14:52:02.433969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.716 [2024-11-20 14:52:02.433983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.716 [2024-11-20 14:52:02.433990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.716 [2024-11-20 14:52:02.433996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.716 [2024-11-20 14:52:02.434010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.716 qpair failed and we were unable to recover it.
00:32:50.716 [2024-11-20 14:52:02.443994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.716 [2024-11-20 14:52:02.444073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.716 [2024-11-20 14:52:02.444091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.716 [2024-11-20 14:52:02.444097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.716 [2024-11-20 14:52:02.444103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.716 [2024-11-20 14:52:02.444117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.716 qpair failed and we were unable to recover it.
00:32:50.716 [2024-11-20 14:52:02.453983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.454041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.454057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.454065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.454072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.454088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.463942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.463997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.464012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.464019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.464025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.464040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.474038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.474093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.474108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.474115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.474122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.474138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.484070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.484147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.484161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.484168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.484177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.484192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.494161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.494257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.494271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.494277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.494284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.494299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.504125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.504193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.504206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.504213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.504219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.504234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.514158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.514212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.514226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.514233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.514239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.514253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.716 [2024-11-20 14:52:02.524246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.716 [2024-11-20 14:52:02.524306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.716 [2024-11-20 14:52:02.524321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.716 [2024-11-20 14:52:02.524327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.716 [2024-11-20 14:52:02.524333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.716 [2024-11-20 14:52:02.524347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.716 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.534268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.534323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.534337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.534344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.534349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.534365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.544251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.544323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.544336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.544343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.544349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.544363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.554266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.554338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.554352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.554359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.554364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.554379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.564296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.564349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.564362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.564369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.564375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.564388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.574333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.574382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.574400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.574406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.574412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.574427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.584392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.584445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.584459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.584466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.584472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.584486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.594417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.594516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.594530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.594537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.594543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.594557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.604393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.604447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.604461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.604467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.604473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.604488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.614434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.614494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.614508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.614514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.614524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.614538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.624478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.624534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.624548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.624555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.624561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.624576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.634438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.634498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.634511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.634518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.634524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.634538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.644444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.644499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.644513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.644520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.644525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.644539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.654536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.717 [2024-11-20 14:52:02.654589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.717 [2024-11-20 14:52:02.654603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.717 [2024-11-20 14:52:02.654609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.717 [2024-11-20 14:52:02.654616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.717 [2024-11-20 14:52:02.654630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.717 qpair failed and we were unable to recover it. 
00:32:50.717 [2024-11-20 14:52:02.664562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.718 [2024-11-20 14:52:02.664614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.718 [2024-11-20 14:52:02.664627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.718 [2024-11-20 14:52:02.664634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.718 [2024-11-20 14:52:02.664640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.718 [2024-11-20 14:52:02.664654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.718 qpair failed and we were unable to recover it. 
00:32:50.978 [2024-11-20 14:52:02.674641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.978 [2024-11-20 14:52:02.674700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.978 [2024-11-20 14:52:02.674717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.978 [2024-11-20 14:52:02.674724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.978 [2024-11-20 14:52:02.674730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.978 [2024-11-20 14:52:02.674746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.978 qpair failed and we were unable to recover it. 
00:32:50.978 [2024-11-20 14:52:02.684642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.978 [2024-11-20 14:52:02.684713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.978 [2024-11-20 14:52:02.684728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.978 [2024-11-20 14:52:02.684735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.978 [2024-11-20 14:52:02.684741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.978 [2024-11-20 14:52:02.684756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.978 qpair failed and we were unable to recover it. 
00:32:50.978 [2024-11-20 14:52:02.694694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.978 [2024-11-20 14:52:02.694749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.978 [2024-11-20 14:52:02.694763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.978 [2024-11-20 14:52:02.694770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.978 [2024-11-20 14:52:02.694776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.978 [2024-11-20 14:52:02.694791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.978 qpair failed and we were unable to recover it. 
00:32:50.978 [2024-11-20 14:52:02.704696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.978 [2024-11-20 14:52:02.704751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.978 [2024-11-20 14:52:02.704768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.978 [2024-11-20 14:52:02.704776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.978 [2024-11-20 14:52:02.704782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.978 [2024-11-20 14:52:02.704798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.978 qpair failed and we were unable to recover it. 
00:32:50.978 [2024-11-20 14:52:02.714685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:50.978 [2024-11-20 14:52:02.714743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:50.978 [2024-11-20 14:52:02.714757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:50.978 [2024-11-20 14:52:02.714764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:50.978 [2024-11-20 14:52:02.714770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:50.978 [2024-11-20 14:52:02.714786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:50.978 qpair failed and we were unable to recover it. 
00:32:50.978 [2024-11-20 14:52:02.724763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.978 [2024-11-20 14:52:02.724852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.978 [2024-11-20 14:52:02.724866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.978 [2024-11-20 14:52:02.724873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.978 [2024-11-20 14:52:02.724879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.978 [2024-11-20 14:52:02.724892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.978 qpair failed and we were unable to recover it.
00:32:50.978 [2024-11-20 14:52:02.734831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.734887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.734901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.734908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.734914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.734928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.744806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.744862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.744876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.744883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.744892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.744907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.754884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.754940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.754958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.754965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.754971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.754986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.764883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.764934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.764953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.764960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.764966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.764981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.774904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.774958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.774973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.774979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.774985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.775001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.784924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.784984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.784998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.785005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.785011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.785026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.794968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.795041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.795055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.795061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.795067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.795082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.805002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.805058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.805072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.805078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.805084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.805098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.815030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.815086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.815100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.815106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.815112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.815126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.825040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.825092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.825107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.825114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.825119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.825133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.835086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.835168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.835185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.835192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.835198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.835212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.845122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.845180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.845197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.845204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.845210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.845226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.855148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.855199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.979 [2024-11-20 14:52:02.855213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.979 [2024-11-20 14:52:02.855220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.979 [2024-11-20 14:52:02.855226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.979 [2024-11-20 14:52:02.855240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.979 qpair failed and we were unable to recover it.
00:32:50.979 [2024-11-20 14:52:02.865160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.979 [2024-11-20 14:52:02.865216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.865230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.865237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.865243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.865258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:50.980 [2024-11-20 14:52:02.875193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.980 [2024-11-20 14:52:02.875252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.875265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.875272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.875283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.875298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:50.980 [2024-11-20 14:52:02.885228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.980 [2024-11-20 14:52:02.885279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.885293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.885300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.885306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.885321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:50.980 [2024-11-20 14:52:02.895257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.980 [2024-11-20 14:52:02.895328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.895342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.895349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.895354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.895368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:50.980 [2024-11-20 14:52:02.905289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.980 [2024-11-20 14:52:02.905343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.905357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.905363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.905369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.905383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:50.980 [2024-11-20 14:52:02.915318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.980 [2024-11-20 14:52:02.915392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.915407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.915414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.915419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.915434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:50.980 [2024-11-20 14:52:02.925345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:50.980 [2024-11-20 14:52:02.925446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:50.980 [2024-11-20 14:52:02.925460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:50.980 [2024-11-20 14:52:02.925467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:50.980 [2024-11-20 14:52:02.925473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:50.980 [2024-11-20 14:52:02.925487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:50.980 qpair failed and we were unable to recover it.
00:32:51.240 [2024-11-20 14:52:02.935400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.240 [2024-11-20 14:52:02.935471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.240 [2024-11-20 14:52:02.935487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.240 [2024-11-20 14:52:02.935494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.240 [2024-11-20 14:52:02.935500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.240 [2024-11-20 14:52:02.935515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.240 qpair failed and we were unable to recover it.
00:32:51.240 [2024-11-20 14:52:02.945399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.240 [2024-11-20 14:52:02.945455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.240 [2024-11-20 14:52:02.945471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.240 [2024-11-20 14:52:02.945478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.240 [2024-11-20 14:52:02.945484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.240 [2024-11-20 14:52:02.945498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.240 qpair failed and we were unable to recover it.
00:32:51.240 [2024-11-20 14:52:02.955440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.240 [2024-11-20 14:52:02.955504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.240 [2024-11-20 14:52:02.955519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.240 [2024-11-20 14:52:02.955526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.240 [2024-11-20 14:52:02.955532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.240 [2024-11-20 14:52:02.955546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.240 qpair failed and we were unable to recover it.
00:32:51.240 [2024-11-20 14:52:02.965446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.240 [2024-11-20 14:52:02.965498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.240 [2024-11-20 14:52:02.965516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.240 [2024-11-20 14:52:02.965523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.240 [2024-11-20 14:52:02.965529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.240 [2024-11-20 14:52:02.965543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.240 qpair failed and we were unable to recover it.
00:32:51.240 [2024-11-20 14:52:02.975468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.240 [2024-11-20 14:52:02.975546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.240 [2024-11-20 14:52:02.975560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.240 [2024-11-20 14:52:02.975566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.240 [2024-11-20 14:52:02.975572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.240 [2024-11-20 14:52:02.975587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.240 qpair failed and we were unable to recover it.
00:32:51.240 [2024-11-20 14:52:02.985514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.240 [2024-11-20 14:52:02.985566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.240 [2024-11-20 14:52:02.985580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.240 [2024-11-20 14:52:02.985587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.240 [2024-11-20 14:52:02.985593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:02.985607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:02.995553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:02.995616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:02.995630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:02.995637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:02.995643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:02.995656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.005491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.005542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.005555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.005562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.005571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.005585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.015596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.015651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.015665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.015672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.015678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.015692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.025611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.025664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.025678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.025684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.025690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.025704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.035653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.035709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.035723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.035729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.035735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.035749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.045700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.045760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.045775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.045781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.045787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.045802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.055708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.055785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.055799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.055805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.055811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.055825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.065734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:51.241 [2024-11-20 14:52:03.065786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:51.241 [2024-11-20 14:52:03.065800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:51.241 [2024-11-20 14:52:03.065807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:51.241 [2024-11-20 14:52:03.065813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:51.241 [2024-11-20 14:52:03.065827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:51.241 qpair failed and we were unable to recover it.
00:32:51.241 [2024-11-20 14:52:03.075773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.241 [2024-11-20 14:52:03.075833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.241 [2024-11-20 14:52:03.075847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.241 [2024-11-20 14:52:03.075854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.241 [2024-11-20 14:52:03.075859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.241 [2024-11-20 14:52:03.075874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.241 qpair failed and we were unable to recover it. 
00:32:51.241 [2024-11-20 14:52:03.085795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.241 [2024-11-20 14:52:03.085846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.241 [2024-11-20 14:52:03.085860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.241 [2024-11-20 14:52:03.085866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.241 [2024-11-20 14:52:03.085872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.241 [2024-11-20 14:52:03.085886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.241 qpair failed and we were unable to recover it. 
00:32:51.241 [2024-11-20 14:52:03.095824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.241 [2024-11-20 14:52:03.095879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.241 [2024-11-20 14:52:03.095896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.241 [2024-11-20 14:52:03.095903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.241 [2024-11-20 14:52:03.095909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.241 [2024-11-20 14:52:03.095923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.241 qpair failed and we were unable to recover it. 
00:32:51.241 [2024-11-20 14:52:03.105853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.241 [2024-11-20 14:52:03.105925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.241 [2024-11-20 14:52:03.105939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.241 [2024-11-20 14:52:03.105951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.241 [2024-11-20 14:52:03.105957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.241 [2024-11-20 14:52:03.105972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.241 qpair failed and we were unable to recover it. 
00:32:51.241 [2024-11-20 14:52:03.115890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.241 [2024-11-20 14:52:03.115944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.241 [2024-11-20 14:52:03.115961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.241 [2024-11-20 14:52:03.115968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.241 [2024-11-20 14:52:03.115974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.241 [2024-11-20 14:52:03.115988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.241 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.125915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.125974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.125989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.125995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.126001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.126016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.135925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.135982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.135996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.136003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.136012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.136027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.145992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.146049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.146063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.146069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.146075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.146090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.155927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.155993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.156007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.156014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.156020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.156035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.166101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.166185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.166199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.166205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.166211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.166226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.176045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.176111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.176125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.176132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.176138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.176152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.242 [2024-11-20 14:52:03.186076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.242 [2024-11-20 14:52:03.186125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.242 [2024-11-20 14:52:03.186140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.242 [2024-11-20 14:52:03.186146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.242 [2024-11-20 14:52:03.186153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.242 [2024-11-20 14:52:03.186168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.242 qpair failed and we were unable to recover it. 
00:32:51.502 [2024-11-20 14:52:03.196167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.502 [2024-11-20 14:52:03.196270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.502 [2024-11-20 14:52:03.196286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.502 [2024-11-20 14:52:03.196293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.502 [2024-11-20 14:52:03.196299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.502 [2024-11-20 14:52:03.196314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.502 qpair failed and we were unable to recover it. 
00:32:51.502 [2024-11-20 14:52:03.206168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.502 [2024-11-20 14:52:03.206236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.502 [2024-11-20 14:52:03.206252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.502 [2024-11-20 14:52:03.206258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.502 [2024-11-20 14:52:03.206264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.502 [2024-11-20 14:52:03.206279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.216179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.216230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.216244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.216251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.216257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.216272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.226230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.226326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.226342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.226349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.226354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.226369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.236256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.236316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.236330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.236336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.236342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.236357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.246307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.246360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.246375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.246381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.246388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.246403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.256353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.256408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.256423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.256429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.256435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.256449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.266326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.266381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.266396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.266403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.266412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.266427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.276359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.276416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.276431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.276437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.276443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.276457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.286392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.286446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.286461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.286467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.286473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.286487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.296344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.296399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.296413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.296420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.296426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.296440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.306492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.306549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.306563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.306570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.306576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.306590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.316404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.316460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.316474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.316481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.316487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.316501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.326502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.326562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.326576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.326583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.326589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.326603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.503 [2024-11-20 14:52:03.336558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.503 [2024-11-20 14:52:03.336622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.503 [2024-11-20 14:52:03.336635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.503 [2024-11-20 14:52:03.336642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.503 [2024-11-20 14:52:03.336648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.503 [2024-11-20 14:52:03.336662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.503 qpair failed and we were unable to recover it. 
00:32:51.504 [2024-11-20 14:52:03.346617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.504 [2024-11-20 14:52:03.346715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.504 [2024-11-20 14:52:03.346729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.504 [2024-11-20 14:52:03.346735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.504 [2024-11-20 14:52:03.346741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.504 [2024-11-20 14:52:03.346756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.504 qpair failed and we were unable to recover it. 
00:32:51.767 [2024-11-20 14:52:03.697510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.767 [2024-11-20 14:52:03.697599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.767 [2024-11-20 14:52:03.697613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.767 [2024-11-20 14:52:03.697619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.767 [2024-11-20 14:52:03.697625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.767 [2024-11-20 14:52:03.697639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.767 qpair failed and we were unable to recover it. 
00:32:51.767 [2024-11-20 14:52:03.707587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.767 [2024-11-20 14:52:03.707641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.767 [2024-11-20 14:52:03.707656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.767 [2024-11-20 14:52:03.707663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.767 [2024-11-20 14:52:03.707669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.767 [2024-11-20 14:52:03.707684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.767 qpair failed and we were unable to recover it. 
00:32:51.767 [2024-11-20 14:52:03.717689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:51.767 [2024-11-20 14:52:03.717790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:51.767 [2024-11-20 14:52:03.717806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:51.767 [2024-11-20 14:52:03.717812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:51.767 [2024-11-20 14:52:03.717818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:51.767 [2024-11-20 14:52:03.717834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:51.767 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.727704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.727764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.727780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.727787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.727793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.727808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.737679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.737756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.737771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.737777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.737783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.737798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.747775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.747877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.747895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.747902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.747908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.747923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.757780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.757887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.757901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.757908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.757914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.757929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.767771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.767825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.767842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.767849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.767855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.767870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.777787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.777842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.777858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.777865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.777871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.777886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.027 [2024-11-20 14:52:03.787841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.027 [2024-11-20 14:52:03.787895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.027 [2024-11-20 14:52:03.787909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.027 [2024-11-20 14:52:03.787916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.027 [2024-11-20 14:52:03.787925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.027 [2024-11-20 14:52:03.787940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.027 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.797784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.797844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.797858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.797865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.797871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.797885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.807889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.807938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.807957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.807963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.807970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.807984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.817855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.817909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.817924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.817930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.817936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.817954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.827954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.828010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.828024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.828031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.828037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.828052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.838038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.838097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.838112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.838119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.838124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.838139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.848010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.848084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.848100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.848107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.848113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.848130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.858046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.858102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.858117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.858124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.858130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.858145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.868064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.868118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.868133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.868140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.868146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.868161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.878112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.878174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.878192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.878198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.878204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.878219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.888066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.888120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.888134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.888141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.888147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.888161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.898098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.898145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.898159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.898165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.898171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.898185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.908167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.908218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.908232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.908238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.908244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.908258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.028 [2024-11-20 14:52:03.918279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.028 [2024-11-20 14:52:03.918371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.028 [2024-11-20 14:52:03.918385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.028 [2024-11-20 14:52:03.918392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.028 [2024-11-20 14:52:03.918401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.028 [2024-11-20 14:52:03.918415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.028 qpair failed and we were unable to recover it. 
00:32:52.029 [2024-11-20 14:52:03.928234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.029 [2024-11-20 14:52:03.928288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.029 [2024-11-20 14:52:03.928302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.029 [2024-11-20 14:52:03.928308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.029 [2024-11-20 14:52:03.928314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.029 [2024-11-20 14:52:03.928328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.029 qpair failed and we were unable to recover it. 
00:32:52.029 [2024-11-20 14:52:03.938260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.029 [2024-11-20 14:52:03.938312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.029 [2024-11-20 14:52:03.938326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.029 [2024-11-20 14:52:03.938333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.029 [2024-11-20 14:52:03.938339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.029 [2024-11-20 14:52:03.938353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.029 qpair failed and we were unable to recover it. 
00:32:52.029 [2024-11-20 14:52:03.948281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.029 [2024-11-20 14:52:03.948335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.029 [2024-11-20 14:52:03.948349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.029 [2024-11-20 14:52:03.948356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.029 [2024-11-20 14:52:03.948362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.029 [2024-11-20 14:52:03.948376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.029 qpair failed and we were unable to recover it. 
00:32:52.029 [2024-11-20 14:52:03.958316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.029 [2024-11-20 14:52:03.958375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.029 [2024-11-20 14:52:03.958388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.029 [2024-11-20 14:52:03.958395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.029 [2024-11-20 14:52:03.958401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.029 [2024-11-20 14:52:03.958415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.029 qpair failed and we were unable to recover it. 
00:32:52.029 [2024-11-20 14:52:03.968305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.029 [2024-11-20 14:52:03.968378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.029 [2024-11-20 14:52:03.968393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.029 [2024-11-20 14:52:03.968400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.029 [2024-11-20 14:52:03.968406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.029 [2024-11-20 14:52:03.968420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.029 qpair failed and we were unable to recover it.
00:32:52.029 [2024-11-20 14:52:03.978371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.029 [2024-11-20 14:52:03.978427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.029 [2024-11-20 14:52:03.978443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.029 [2024-11-20 14:52:03.978450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.029 [2024-11-20 14:52:03.978456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.029 [2024-11-20 14:52:03.978471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.029 qpair failed and we were unable to recover it.
00:32:52.289 [2024-11-20 14:52:03.988341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.289 [2024-11-20 14:52:03.988405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.289 [2024-11-20 14:52:03.988421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.289 [2024-11-20 14:52:03.988428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.289 [2024-11-20 14:52:03.988435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.289 [2024-11-20 14:52:03.988453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.289 qpair failed and we were unable to recover it.
00:32:52.289 [2024-11-20 14:52:03.998435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.289 [2024-11-20 14:52:03.998536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.289 [2024-11-20 14:52:03.998551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.289 [2024-11-20 14:52:03.998558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.289 [2024-11-20 14:52:03.998564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.289 [2024-11-20 14:52:03.998580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.289 qpair failed and we were unable to recover it.
00:32:52.289 [2024-11-20 14:52:04.008457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.289 [2024-11-20 14:52:04.008510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.289 [2024-11-20 14:52:04.008528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.289 [2024-11-20 14:52:04.008535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.289 [2024-11-20 14:52:04.008540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.289 [2024-11-20 14:52:04.008555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.289 qpair failed and we were unable to recover it.
00:32:52.289 [2024-11-20 14:52:04.018497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.289 [2024-11-20 14:52:04.018550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.289 [2024-11-20 14:52:04.018563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.289 [2024-11-20 14:52:04.018570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.289 [2024-11-20 14:52:04.018576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.018590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.028505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.028559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.028573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.028580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.028586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.028600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.038543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.038600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.038614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.038620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.038627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.038641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.048585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.048641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.048655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.048662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.048671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.048686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.058632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.058690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.058703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.058710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.058715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.058730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.068630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.068682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.068697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.068703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.068709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.068724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.078694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.078753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.078767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.078773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.078779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.078793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.088674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.088732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.088747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.088753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.088759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.088773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.098718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.098769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.098783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.098789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.098795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.098809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.108734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.108785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.108799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.108806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.108811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.108826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.118777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.118835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.118849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.118856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.118862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.118876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.128798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.128855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.128869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.128876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.128882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.128896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.138855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.138910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.138927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.138934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.138940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.138958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.148843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.290 [2024-11-20 14:52:04.148928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.290 [2024-11-20 14:52:04.148942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.290 [2024-11-20 14:52:04.148951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.290 [2024-11-20 14:52:04.148958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.290 [2024-11-20 14:52:04.148972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.290 qpair failed and we were unable to recover it.
00:32:52.290 [2024-11-20 14:52:04.158885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.158945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.158963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.158971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.158977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.158991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.168909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.168971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.168986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.168993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.168999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.169014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.178942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.178994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.179008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.179014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.179025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.179040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.189016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.189067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.189081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.189088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.189094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.189109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.199015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.199069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.199083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.199089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.199095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.199110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.209028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.209078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.209092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.209098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.209104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.209118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.219056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.219125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.219139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.219146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.219152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.219166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.229086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.229138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.229152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.229158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.229164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.229179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.291 [2024-11-20 14:52:04.239149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.291 [2024-11-20 14:52:04.239206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.291 [2024-11-20 14:52:04.239220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.291 [2024-11-20 14:52:04.239226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.291 [2024-11-20 14:52:04.239232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.291 [2024-11-20 14:52:04.239246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.291 qpair failed and we were unable to recover it.
00:32:52.550 [2024-11-20 14:52:04.249082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.550 [2024-11-20 14:52:04.249138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.550 [2024-11-20 14:52:04.249153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.550 [2024-11-20 14:52:04.249160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.550 [2024-11-20 14:52:04.249166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.550 [2024-11-20 14:52:04.249182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.550 qpair failed and we were unable to recover it.
00:32:52.550 [2024-11-20 14:52:04.259229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.550 [2024-11-20 14:52:04.259287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.550 [2024-11-20 14:52:04.259302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.550 [2024-11-20 14:52:04.259308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.550 [2024-11-20 14:52:04.259314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.550 [2024-11-20 14:52:04.259329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.550 qpair failed and we were unable to recover it.
00:32:52.550 [2024-11-20 14:52:04.269205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.550 [2024-11-20 14:52:04.269255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.550 [2024-11-20 14:52:04.269273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.550 [2024-11-20 14:52:04.269280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.550 [2024-11-20 14:52:04.269286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.550 [2024-11-20 14:52:04.269301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.550 qpair failed and we were unable to recover it.
00:32:52.550 [2024-11-20 14:52:04.279227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.550 [2024-11-20 14:52:04.279282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.550 [2024-11-20 14:52:04.279296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.550 [2024-11-20 14:52:04.279302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.550 [2024-11-20 14:52:04.279308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.550 [2024-11-20 14:52:04.279322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.550 qpair failed and we were unable to recover it.
00:32:52.550 [2024-11-20 14:52:04.289272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.550 [2024-11-20 14:52:04.289352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.551 [2024-11-20 14:52:04.289366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.551 [2024-11-20 14:52:04.289372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.551 [2024-11-20 14:52:04.289378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.551 [2024-11-20 14:52:04.289392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.551 qpair failed and we were unable to recover it.
00:32:52.551 [2024-11-20 14:52:04.299285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.551 [2024-11-20 14:52:04.299343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.551 [2024-11-20 14:52:04.299357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.551 [2024-11-20 14:52:04.299363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.551 [2024-11-20 14:52:04.299369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.551 [2024-11-20 14:52:04.299383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.551 qpair failed and we were unable to recover it.
00:32:52.551 [2024-11-20 14:52:04.309361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:52.551 [2024-11-20 14:52:04.309461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:52.551 [2024-11-20 14:52:04.309474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:52.551 [2024-11-20 14:52:04.309481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:52.551 [2024-11-20 14:52:04.309490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:52.551 [2024-11-20 14:52:04.309504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:52.551 qpair failed and we were unable to recover it.
00:32:52.551 [2024-11-20 14:52:04.319345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.319401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.319415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.319421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.319427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.319442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.329376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.329429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.329443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.329449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.329456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.329470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.339385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.339436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.339449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.339456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.339462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.339476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.349418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.349478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.349492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.349499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.349505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.349519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.359471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.359532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.359546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.359552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.359558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.359573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.369483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.369538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.369553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.369560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.369565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.369580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.379522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.379579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.379593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.379600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.379605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.379620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.389579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.389635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.389649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.389656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.389662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.389676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.399586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.399649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.399666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.399672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.399678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.399693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.409620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.409674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.409688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.409695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.409701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.409714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.419640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.419702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.419716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.419722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.419728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.419742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.429666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.429719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.429733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.429739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.429745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.429760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.439712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.439786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.439800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.439807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.439816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.439832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.449727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.449779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.449793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.449800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.449806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.449820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.551 qpair failed and we were unable to recover it. 
00:32:52.551 [2024-11-20 14:52:04.459747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.551 [2024-11-20 14:52:04.459812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.551 [2024-11-20 14:52:04.459826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.551 [2024-11-20 14:52:04.459832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.551 [2024-11-20 14:52:04.459838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.551 [2024-11-20 14:52:04.459852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.552 qpair failed and we were unable to recover it. 
00:32:52.552 [2024-11-20 14:52:04.469773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.552 [2024-11-20 14:52:04.469825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.552 [2024-11-20 14:52:04.469840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.552 [2024-11-20 14:52:04.469848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.552 [2024-11-20 14:52:04.469854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.552 [2024-11-20 14:52:04.469869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.552 qpair failed and we were unable to recover it. 
00:32:52.552 [2024-11-20 14:52:04.479816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.552 [2024-11-20 14:52:04.479877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.552 [2024-11-20 14:52:04.479891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.552 [2024-11-20 14:52:04.479898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.552 [2024-11-20 14:52:04.479904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.552 [2024-11-20 14:52:04.479918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.552 qpair failed and we were unable to recover it. 
00:32:52.552 [2024-11-20 14:52:04.489866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.552 [2024-11-20 14:52:04.489927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.552 [2024-11-20 14:52:04.489941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.552 [2024-11-20 14:52:04.489951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.552 [2024-11-20 14:52:04.489957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.552 [2024-11-20 14:52:04.489972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.552 qpair failed and we were unable to recover it. 
00:32:52.552 [2024-11-20 14:52:04.499853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.552 [2024-11-20 14:52:04.499910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.552 [2024-11-20 14:52:04.499925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.552 [2024-11-20 14:52:04.499931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.552 [2024-11-20 14:52:04.499937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.552 [2024-11-20 14:52:04.499955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.552 qpair failed and we were unable to recover it. 
00:32:52.810 [2024-11-20 14:52:04.509911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.810 [2024-11-20 14:52:04.509994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.810 [2024-11-20 14:52:04.510011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.810 [2024-11-20 14:52:04.510018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.810 [2024-11-20 14:52:04.510023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.810 [2024-11-20 14:52:04.510040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.810 qpair failed and we were unable to recover it. 
00:32:52.810 [2024-11-20 14:52:04.519921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.810 [2024-11-20 14:52:04.519997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.810 [2024-11-20 14:52:04.520012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.810 [2024-11-20 14:52:04.520020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.810 [2024-11-20 14:52:04.520026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.810 [2024-11-20 14:52:04.520041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.810 qpair failed and we were unable to recover it. 
00:32:52.810 [2024-11-20 14:52:04.529934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.810 [2024-11-20 14:52:04.529993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.530010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.530017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.530023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.530038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.539950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.540038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.540052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.540058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.540064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.540078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.549987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.550044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.550058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.550065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.550071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.550085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.559959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.560017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.560031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.560037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.560043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.560057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.570053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.570115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.570129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.570136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.570142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.570160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.580082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.580137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.580154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.580161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.580167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.580181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.590128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.590187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.590201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.590208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.590213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.590228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.600214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.600314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.600328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.600334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.600340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.600355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.610178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.610233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.610247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.610253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.610259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.610273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.620204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.620256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.620271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.620277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.620283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.620297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.630158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.630211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.630225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.630232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.630238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.630253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.640271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.640337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.640351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.640358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.640364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.640379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.650308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.650363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.650377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.650384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.650390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.650403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.660312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.660368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.660385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.660392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.660398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.660412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.670360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.670416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.670430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.670437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.670443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.670458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.680380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.680438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.680453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.680460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.680465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.680480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.690410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.690466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.690480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.690487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.690493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.690507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.700439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.700498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.700511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.700518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.700524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.700541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.710466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.710519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.710533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.710540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.710546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.710560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.720512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.720618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.720633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.720640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.720646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.720661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.730553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.730614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.730628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.730634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.730640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.730655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.740563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.740620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.740634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.740641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.740646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.740661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.750577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.811 [2024-11-20 14:52:04.750633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.811 [2024-11-20 14:52:04.750648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.811 [2024-11-20 14:52:04.750655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.811 [2024-11-20 14:52:04.750660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.811 [2024-11-20 14:52:04.750675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.811 qpair failed and we were unable to recover it. 
00:32:52.811 [2024-11-20 14:52:04.760628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:52.812 [2024-11-20 14:52:04.760683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:52.812 [2024-11-20 14:52:04.760699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:52.812 [2024-11-20 14:52:04.760708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:52.812 [2024-11-20 14:52:04.760715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:52.812 [2024-11-20 14:52:04.760730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:52.812 qpair failed and we were unable to recover it. 
00:32:53.071 [2024-11-20 14:52:04.770631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.770685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.770704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.770713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.770720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.770736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.780597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.780653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.780669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.780675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.780681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.780696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.790687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.790743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.790760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.790767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.790772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.790788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.800722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.800777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.800793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.800800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.800806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.800820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.810739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.810846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.810860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.810866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.810872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.810887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.820777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.820833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.820848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.820855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.820861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.820875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.830810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.830871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.830886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.830892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.830898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.830918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.840876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.840957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.840972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.840978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.840984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.840999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.850805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.850863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.850880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.850887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.850894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.850910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.860907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.860965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.860980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.860987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.860993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.861008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.870886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.870974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.870989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.870995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.871001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.871016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.881027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.881121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.881135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.881142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.881147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.881162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.072 [2024-11-20 14:52:04.891016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.072 [2024-11-20 14:52:04.891073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.072 [2024-11-20 14:52:04.891087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.072 [2024-11-20 14:52:04.891094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.072 [2024-11-20 14:52:04.891100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.072 [2024-11-20 14:52:04.891115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.072 qpair failed and we were unable to recover it. 
00:32:53.073 [2024-11-20 14:52:04.901040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.901089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.901102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.901109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.901115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.901130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.911132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.911200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.911214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.911221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.911226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.911241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.921120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.921180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.921197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.921204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.921209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.921224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.931125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.931194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.931208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.931215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.931221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.931235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.941138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.941194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.941208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.941215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.941221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.941235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.951194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.951278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.951292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.951299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.951305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.951321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.961199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.961257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.961271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.961279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.961285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.961302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.971153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.971207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.971223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.971229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.971236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.971251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.981236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.981292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.981307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.981314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.981320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.981334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:04.991263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:04.991334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:04.991347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:04.991354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:04.991360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:04.991373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:05.001247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:05.001312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:05.001326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:05.001333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:05.001339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:05.001353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:05.011255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:05.011354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:05.011368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:05.011375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:05.011381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:05.011395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.073 [2024-11-20 14:52:05.021294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.073 [2024-11-20 14:52:05.021349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.073 [2024-11-20 14:52:05.021363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.073 [2024-11-20 14:52:05.021369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.073 [2024-11-20 14:52:05.021375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.073 [2024-11-20 14:52:05.021389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.073 qpair failed and we were unable to recover it.
00:32:53.333 [2024-11-20 14:52:05.031375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.333 [2024-11-20 14:52:05.031434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.333 [2024-11-20 14:52:05.031450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.333 [2024-11-20 14:52:05.031458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.333 [2024-11-20 14:52:05.031463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.333 [2024-11-20 14:52:05.031478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.333 qpair failed and we were unable to recover it.
00:32:53.333 [2024-11-20 14:52:05.041371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.333 [2024-11-20 14:52:05.041429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.041444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.041451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.041457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.041472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.051460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.051522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.051540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.051547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.051552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.051567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.061486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.061538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.061552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.061559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.061565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.061579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.071537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.071594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.071609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.071616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.071622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.071637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.081538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.081593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.081606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.081613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.081619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.081633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.091557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.091609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.091623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.091630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.091636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.091653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.101519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.101575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.101588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.101595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.101601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.101615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.111557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.111610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.111624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.111631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.111637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.111652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.121662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.121716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.121730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.121737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.121743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.121758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.131712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.131774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.131789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.131796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.131802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.131817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.141632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.141694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.141709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.141716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.141722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.141737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.151769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.151825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.151840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.151846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.151852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.151866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.161794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.161859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.161873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.334 [2024-11-20 14:52:05.161880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.334 [2024-11-20 14:52:05.161886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.334 [2024-11-20 14:52:05.161901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.334 qpair failed and we were unable to recover it.
00:32:53.334 [2024-11-20 14:52:05.171836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.334 [2024-11-20 14:52:05.171893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.334 [2024-11-20 14:52:05.171908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.171915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.171921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.171935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.181810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.181901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.181919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.181926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.181931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.181945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.191769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.191822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.191837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.191843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.191849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.191864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.201890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.201944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.201965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.201972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.201978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.201992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.211843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.211899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.211912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.211919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.211925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.211939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.221876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.221962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.221977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.221984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.221989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.222008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.231886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.231942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.231961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.231967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.231973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.231987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.242009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:53.335 [2024-11-20 14:52:05.242061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:53.335 [2024-11-20 14:52:05.242075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:53.335 [2024-11-20 14:52:05.242082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:53.335 [2024-11-20 14:52:05.242088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:53.335 [2024-11-20 14:52:05.242103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:53.335 qpair failed and we were unable to recover it.
00:32:53.335 [2024-11-20 14:52:05.252018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.335 [2024-11-20 14:52:05.252085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.335 [2024-11-20 14:52:05.252099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.335 [2024-11-20 14:52:05.252106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.335 [2024-11-20 14:52:05.252112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.335 [2024-11-20 14:52:05.252127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.335 qpair failed and we were unable to recover it. 
00:32:53.335 [2024-11-20 14:52:05.262068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.335 [2024-11-20 14:52:05.262126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.335 [2024-11-20 14:52:05.262141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.335 [2024-11-20 14:52:05.262147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.335 [2024-11-20 14:52:05.262153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.335 [2024-11-20 14:52:05.262168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.335 qpair failed and we were unable to recover it. 
00:32:53.335 [2024-11-20 14:52:05.272046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.335 [2024-11-20 14:52:05.272125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.335 [2024-11-20 14:52:05.272141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.335 [2024-11-20 14:52:05.272147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.335 [2024-11-20 14:52:05.272153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.335 [2024-11-20 14:52:05.272168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.335 qpair failed and we were unable to recover it. 
00:32:53.335 [2024-11-20 14:52:05.282103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.335 [2024-11-20 14:52:05.282159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.335 [2024-11-20 14:52:05.282174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.335 [2024-11-20 14:52:05.282181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.335 [2024-11-20 14:52:05.282187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.335 [2024-11-20 14:52:05.282201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.335 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.292120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.292179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.292195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.292202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.292208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.292222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.302152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.302238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.302253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.302259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.302266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.302280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.312203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.312269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.312286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.312292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.312298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.312313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.322252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.322310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.322325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.322331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.322337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.322351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.332198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.332283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.332297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.332303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.332309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.332323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.342290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.342346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.342360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.342366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.342372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.342386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.352303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.352359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.352372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.352379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.352385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.352403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.362363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.596 [2024-11-20 14:52:05.362420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.596 [2024-11-20 14:52:05.362434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.596 [2024-11-20 14:52:05.362441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.596 [2024-11-20 14:52:05.362447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.596 [2024-11-20 14:52:05.362461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.596 qpair failed and we were unable to recover it. 
00:32:53.596 [2024-11-20 14:52:05.372343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.372403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.372418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.372424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.372430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.372445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.382417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.382474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.382488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.382494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.382500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.382514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.392435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.392489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.392503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.392509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.392515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.392529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.402476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.402534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.402549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.402555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.402561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.402575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.412544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.412601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.412615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.412621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.412627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.412642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.422531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.422603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.422617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.422624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.422630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.422644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.432569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.432627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.432641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.432648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.432654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.432668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.442594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.442648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.442665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.442672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.442678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.442692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.452615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.452667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.452681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.452688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.452694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.452709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.462631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.462681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.462695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.462702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.462708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.462722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.472672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.472751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.472766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.472772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.472778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.472793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.482710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.482768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.482782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.482789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.482795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.482814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.492727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.492784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.492799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.597 [2024-11-20 14:52:05.492806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.597 [2024-11-20 14:52:05.492812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.597 [2024-11-20 14:52:05.492826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.597 qpair failed and we were unable to recover it. 
00:32:53.597 [2024-11-20 14:52:05.502757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.597 [2024-11-20 14:52:05.502811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.597 [2024-11-20 14:52:05.502825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.598 [2024-11-20 14:52:05.502832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.598 [2024-11-20 14:52:05.502838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.598 [2024-11-20 14:52:05.502852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.598 qpair failed and we were unable to recover it. 
00:32:53.598 [2024-11-20 14:52:05.512779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.598 [2024-11-20 14:52:05.512833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.598 [2024-11-20 14:52:05.512848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.598 [2024-11-20 14:52:05.512855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.598 [2024-11-20 14:52:05.512860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.598 [2024-11-20 14:52:05.512874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.598 qpair failed and we were unable to recover it. 
00:32:53.598 [2024-11-20 14:52:05.522816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.598 [2024-11-20 14:52:05.522870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.598 [2024-11-20 14:52:05.522884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.598 [2024-11-20 14:52:05.522890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.598 [2024-11-20 14:52:05.522896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.598 [2024-11-20 14:52:05.522910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.598 qpair failed and we were unable to recover it. 
00:32:53.598 [2024-11-20 14:52:05.532845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.598 [2024-11-20 14:52:05.532905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.598 [2024-11-20 14:52:05.532919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.598 [2024-11-20 14:52:05.532926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.598 [2024-11-20 14:52:05.532932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.598 [2024-11-20 14:52:05.532951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.598 qpair failed and we were unable to recover it. 
00:32:53.598 [2024-11-20 14:52:05.542861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.598 [2024-11-20 14:52:05.542915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.598 [2024-11-20 14:52:05.542929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.598 [2024-11-20 14:52:05.542936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.598 [2024-11-20 14:52:05.542942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.598 [2024-11-20 14:52:05.542960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.598 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.552942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.552995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.553012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.553018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.553024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.553040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.562934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.563009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.563025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.563032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.563038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.563054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.572967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.573025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.573043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.573050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.573056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.573071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.583020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.583080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.583094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.583101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.583107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.583121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.593016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.593073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.593087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.593094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.593100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.593114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.603059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.603118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.603131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.603138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.603144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.603158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.613083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.858 [2024-11-20 14:52:05.613137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.858 [2024-11-20 14:52:05.613151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.858 [2024-11-20 14:52:05.613158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.858 [2024-11-20 14:52:05.613164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.858 [2024-11-20 14:52:05.613181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.858 qpair failed and we were unable to recover it. 
00:32:53.858 [2024-11-20 14:52:05.623116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.623170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.623183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.623190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.623196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.623210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.633131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.633195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.633209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.633216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.633222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.633237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.643150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.643240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.643253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.643260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.643266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.643280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.653208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.653264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.653279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.653285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.653291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.653305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.663231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.663286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.663299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.663306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.663312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.663326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.673174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.673230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.673245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.673252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.673258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.673272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.683283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.683340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.683353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.683359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.683365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.683379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.693311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.693363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.693377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.693384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.693390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.693404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.703330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.703378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.703395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.703402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.703408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.703422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.713358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.713424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.713438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.713445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.713450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.713465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.723399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.723491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.723504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.723511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.723516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.723530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.733438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.733500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.733514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.733521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.733526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.733541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.743443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.859 [2024-11-20 14:52:05.743499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.859 [2024-11-20 14:52:05.743516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.859 [2024-11-20 14:52:05.743523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.859 [2024-11-20 14:52:05.743529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.859 [2024-11-20 14:52:05.743547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.859 qpair failed and we were unable to recover it. 
00:32:53.859 [2024-11-20 14:52:05.753475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.860 [2024-11-20 14:52:05.753524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.860 [2024-11-20 14:52:05.753537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.860 [2024-11-20 14:52:05.753544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.860 [2024-11-20 14:52:05.753550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.860 [2024-11-20 14:52:05.753565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.860 qpair failed and we were unable to recover it. 
00:32:53.860 [2024-11-20 14:52:05.763526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.860 [2024-11-20 14:52:05.763582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.860 [2024-11-20 14:52:05.763596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.860 [2024-11-20 14:52:05.763602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.860 [2024-11-20 14:52:05.763608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.860 [2024-11-20 14:52:05.763623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.860 qpair failed and we were unable to recover it. 
00:32:53.860 [2024-11-20 14:52:05.773591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.860 [2024-11-20 14:52:05.773646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.860 [2024-11-20 14:52:05.773662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.860 [2024-11-20 14:52:05.773669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.860 [2024-11-20 14:52:05.773675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.860 [2024-11-20 14:52:05.773689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.860 qpair failed and we were unable to recover it. 
00:32:53.860 [2024-11-20 14:52:05.783580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.860 [2024-11-20 14:52:05.783630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.860 [2024-11-20 14:52:05.783644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.860 [2024-11-20 14:52:05.783650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.860 [2024-11-20 14:52:05.783656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.860 [2024-11-20 14:52:05.783671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.860 qpair failed and we were unable to recover it. 
00:32:53.860 [2024-11-20 14:52:05.793603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.860 [2024-11-20 14:52:05.793657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.860 [2024-11-20 14:52:05.793671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.860 [2024-11-20 14:52:05.793678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.860 [2024-11-20 14:52:05.793684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.860 [2024-11-20 14:52:05.793698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.860 qpair failed and we were unable to recover it. 
00:32:53.860 [2024-11-20 14:52:05.803587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:53.860 [2024-11-20 14:52:05.803665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:53.860 [2024-11-20 14:52:05.803679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:53.860 [2024-11-20 14:52:05.803686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:53.860 [2024-11-20 14:52:05.803692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:53.860 [2024-11-20 14:52:05.803706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:53.860 qpair failed and we were unable to recover it. 
00:32:54.119 [2024-11-20 14:52:05.813675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.119 [2024-11-20 14:52:05.813733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.119 [2024-11-20 14:52:05.813749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.119 [2024-11-20 14:52:05.813756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.119 [2024-11-20 14:52:05.813762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:54.119 [2024-11-20 14:52:05.813778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:54.119 qpair failed and we were unable to recover it. 
00:32:54.119 [2024-11-20 14:52:05.823694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.119 [2024-11-20 14:52:05.823743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.119 [2024-11-20 14:52:05.823759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.119 [2024-11-20 14:52:05.823765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.119 [2024-11-20 14:52:05.823771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:54.119 [2024-11-20 14:52:05.823787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:54.119 qpair failed and we were unable to recover it. 
00:32:54.120 [2024-11-20 14:52:05.833723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:54.120 [2024-11-20 14:52:05.833773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:54.120 [2024-11-20 14:52:05.833790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:54.120 [2024-11-20 14:52:05.833796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:54.120 [2024-11-20 14:52:05.833802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0 00:32:54.120 [2024-11-20 14:52:05.833817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:54.120 qpair failed and we were unable to recover it. 
00:32:54.120 [2024-11-20 14:52:05.893881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:54.120 [2024-11-20 14:52:05.893938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:54.120 [2024-11-20 14:52:05.893956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:54.120 [2024-11-20 14:52:05.893963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:54.120 [2024-11-20 14:52:05.893968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x59eba0
00:32:54.120 [2024-11-20 14:52:05.893984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:54.120 qpair failed and we were unable to recover it.
00:32:54.120 [2024-11-20 14:52:05.894074] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:32:54.120 A controller has encountered a failure and is being reset.
00:32:54.120 qpair failed and we were unable to recover it.
00:32:54.120 qpair failed and we were unable to recover it.
00:32:54.120 qpair failed and we were unable to recover it.
00:32:54.120 qpair failed and we were unable to recover it.
00:32:54.120 Controller properly reset.
00:32:54.120 Initializing NVMe Controllers
00:32:54.120 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:54.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:54.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:54.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:54.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:54.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:54.120 Initialization complete. Launching workers.
00:32:54.120 Starting thread on core 1
00:32:54.120 Starting thread on core 2
00:32:54.120 Starting thread on core 3
00:32:54.120 Starting thread on core 0
00:32:54.120 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:32:54.120
00:32:54.120 real 0m10.785s
00:32:54.120 user 0m19.636s
00:32:54.120 sys 0m4.671s
00:32:54.120 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:54.120 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:54.120 ************************************
00:32:54.120 END TEST nvmf_target_disconnect_tc2
00:32:54.120 ************************************
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1756486 ']'
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1756486
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1756486 ']'
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1756486
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1756486
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:32:54.380 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1756486'
killing process with pid 1756486
14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1756486
14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1756486
00:32:54.639 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:54.639 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:54.640 14:52:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:56.545 14:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:56.545
00:32:56.545 real 0m19.517s
00:32:56.545 user 0m47.307s
00:32:56.545 sys 0m9.523s
00:32:56.545 14:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:56.545 14:52:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:56.545 ************************************
00:32:56.545 END TEST nvmf_target_disconnect
00:32:56.545 ************************************
00:32:56.805 14:52:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:32:56.805
00:32:56.805 real 5m53.378s
00:32:56.805 user 10m38.722s
00:32:56.805 sys 1m58.122s
00:32:56.805 14:52:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:56.805 14:52:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:56.805 ************************************
00:32:56.805 END TEST nvmf_host
00:32:56.805 ************************************
00:32:56.805 14:52:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:32:56.805 14:52:08 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:32:56.805 14:52:08 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:32:56.805 14:52:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:56.805 14:52:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:56.805 14:52:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:56.805 ************************************
00:32:56.805 START TEST nvmf_target_core_interrupt_mode
00:32:56.805 ************************************
00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
* Looking for test storage...
00:32:56.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:32:56.805 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.805 --rc 
genhtml_branch_coverage=1 00:32:56.805 --rc genhtml_function_coverage=1 00:32:56.805 --rc genhtml_legend=1 00:32:56.805 --rc geninfo_all_blocks=1 00:32:56.805 --rc geninfo_unexecuted_blocks=1 00:32:56.805 00:32:56.805 ' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.805 --rc genhtml_branch_coverage=1 00:32:56.805 --rc genhtml_function_coverage=1 00:32:56.805 --rc genhtml_legend=1 00:32:56.805 --rc geninfo_all_blocks=1 00:32:56.805 --rc geninfo_unexecuted_blocks=1 00:32:56.805 00:32:56.805 ' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.805 --rc genhtml_branch_coverage=1 00:32:56.805 --rc genhtml_function_coverage=1 00:32:56.805 --rc genhtml_legend=1 00:32:56.805 --rc geninfo_all_blocks=1 00:32:56.805 --rc geninfo_unexecuted_blocks=1 00:32:56.805 00:32:56.805 ' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:56.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.805 --rc genhtml_branch_coverage=1 00:32:56.805 --rc genhtml_function_coverage=1 00:32:56.805 --rc genhtml_legend=1 00:32:56.805 --rc geninfo_all_blocks=1 00:32:56.805 --rc geninfo_unexecuted_blocks=1 00:32:56.805 00:32:56.805 ' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.805 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.066 
14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.066 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.066 
14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.066 ************************************ 00:32:57.066 START TEST nvmf_abort 00:32:57.066 ************************************ 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:32:57.066 * Looking for test storage... 
00:32:57.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:32:57.066 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.066 --rc genhtml_branch_coverage=1 00:32:57.066 --rc genhtml_function_coverage=1 00:32:57.066 --rc genhtml_legend=1 00:32:57.066 --rc geninfo_all_blocks=1 00:32:57.066 --rc geninfo_unexecuted_blocks=1 00:32:57.066 00:32:57.066 ' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.066 --rc genhtml_branch_coverage=1 00:32:57.066 --rc genhtml_function_coverage=1 00:32:57.066 --rc genhtml_legend=1 00:32:57.066 --rc geninfo_all_blocks=1 00:32:57.066 --rc geninfo_unexecuted_blocks=1 00:32:57.066 00:32:57.066 ' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.066 --rc genhtml_branch_coverage=1 00:32:57.066 --rc genhtml_function_coverage=1 00:32:57.066 --rc genhtml_legend=1 00:32:57.066 --rc geninfo_all_blocks=1 00:32:57.066 --rc geninfo_unexecuted_blocks=1 00:32:57.066 00:32:57.066 ' 00:32:57.066 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.066 --rc genhtml_branch_coverage=1 00:32:57.066 --rc genhtml_function_coverage=1 00:32:57.066 --rc genhtml_legend=1 00:32:57.066 --rc geninfo_all_blocks=1 00:32:57.066 --rc geninfo_unexecuted_blocks=1 00:32:57.066 00:32:57.067 ' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.067 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.067 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.067 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:03.636 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:03.636 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:03.636 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:03.636 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:03.637 
14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:03.637 Found net devices under 0000:86:00.0: cvl_0_0 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:03.637 Found net devices under 0000:86:00.1: cvl_0_1 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.637 14:52:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:03.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:33:03.637 00:33:03.637 --- 10.0.0.2 ping statistics --- 00:33:03.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.637 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:33:03.637 00:33:03.637 --- 10.0.0.1 ping statistics --- 00:33:03.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.637 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1760978 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1760978 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1760978 ']' 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.637 14:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.637 [2024-11-20 14:52:14.866693] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:03.637 [2024-11-20 14:52:14.867650] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:33:03.637 [2024-11-20 14:52:14.867683] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.637 [2024-11-20 14:52:14.948111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:03.637 [2024-11-20 14:52:14.988013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.638 [2024-11-20 14:52:14.988049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.638 [2024-11-20 14:52:14.988056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.638 [2024-11-20 14:52:14.988062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.638 [2024-11-20 14:52:14.988067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:03.638 [2024-11-20 14:52:14.989443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:03.638 [2024-11-20 14:52:14.989534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.638 [2024-11-20 14:52:14.989535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:03.638 [2024-11-20 14:52:15.057882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:03.638 [2024-11-20 14:52:15.058795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:03.638 [2024-11-20 14:52:15.059181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:03.638 [2024-11-20 14:52:15.059288] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 [2024-11-20 14:52:15.138435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:33:03.638 Malloc0 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 Delay0 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 [2024-11-20 14:52:15.234401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.638 14:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:03.638 [2024-11-20 14:52:15.365654] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:05.539 Initializing NVMe Controllers 00:33:05.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:05.539 controller IO queue size 128 less than required 00:33:05.539 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:05.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:05.539 Initialization complete. Launching workers. 
00:33:05.539 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37029 00:33:05.539 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37086, failed to submit 66 00:33:05.539 success 37029, unsuccessful 57, failed 0 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.539 rmmod nvme_tcp 00:33:05.539 rmmod nvme_fabrics 00:33:05.539 rmmod nvme_keyring 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.539 14:52:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1760978 ']' 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1760978 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1760978 ']' 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1760978 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.539 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760978 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1760978' 00:33:05.797 killing process with pid 1760978 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1760978 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1760978 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.797 14:52:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.797 14:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.330 00:33:08.330 real 0m10.987s 00:33:08.330 user 0m10.125s 00:33:08.330 sys 0m5.652s 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:08.330 ************************************ 00:33:08.330 END TEST nvmf_abort 00:33:08.330 ************************************ 00:33:08.330 14:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:08.330 ************************************ 00:33:08.330 START TEST nvmf_ns_hotplug_stress 00:33:08.330 ************************************ 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:08.330 * Looking for test storage... 
00:33:08.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.330 14:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.330 14:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:08.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.330 --rc genhtml_branch_coverage=1 00:33:08.330 --rc genhtml_function_coverage=1 00:33:08.330 --rc genhtml_legend=1 00:33:08.330 --rc geninfo_all_blocks=1 00:33:08.330 --rc geninfo_unexecuted_blocks=1 00:33:08.330 00:33:08.330 ' 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:08.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.330 --rc genhtml_branch_coverage=1 00:33:08.330 --rc genhtml_function_coverage=1 00:33:08.330 --rc genhtml_legend=1 00:33:08.330 --rc geninfo_all_blocks=1 00:33:08.330 --rc geninfo_unexecuted_blocks=1 00:33:08.330 00:33:08.330 ' 00:33:08.330 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:08.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.330 --rc genhtml_branch_coverage=1 00:33:08.331 --rc genhtml_function_coverage=1 00:33:08.331 --rc genhtml_legend=1 00:33:08.331 --rc geninfo_all_blocks=1 00:33:08.331 --rc geninfo_unexecuted_blocks=1 00:33:08.331 00:33:08.331 ' 00:33:08.331 14:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:08.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.331 --rc genhtml_branch_coverage=1 00:33:08.331 --rc genhtml_function_coverage=1 00:33:08.331 --rc genhtml_legend=1 00:33:08.331 --rc geninfo_all_blocks=1 00:33:08.331 --rc geninfo_unexecuted_blocks=1 00:33:08.331 00:33:08.331 ' 00:33:08.331 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.331 14:52:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.331 
14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.331 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.678 
14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.678 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.938 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.939 14:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:13.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.939 14:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:13.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.939 
14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:13.939 Found net devices under 0000:86:00.0: cvl_0_0 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:13.939 Found net devices under 0000:86:00.1: cvl_0_1 00:33:13.939 
14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:33:13.939 00:33:13.939 --- 10.0.0.2 ping statistics --- 00:33:13.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.939 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:13.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:33:13.939 00:33:13.939 --- 10.0.0.1 ping statistics --- 00:33:13.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.939 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.939 14:52:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.939 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1764941 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1764941 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1764941 ']' 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.199 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:14.199 [2024-11-20 14:52:25.977689] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:14.199 [2024-11-20 14:52:25.978709] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:33:14.199 [2024-11-20 14:52:25.978752] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.199 [2024-11-20 14:52:26.056223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:14.199 [2024-11-20 14:52:26.098498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.199 [2024-11-20 14:52:26.098536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.199 [2024-11-20 14:52:26.098543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.199 [2024-11-20 14:52:26.098550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.199 [2024-11-20 14:52:26.098555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:14.199 [2024-11-20 14:52:26.099924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.199 [2024-11-20 14:52:26.100031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.199 [2024-11-20 14:52:26.100031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.458 [2024-11-20 14:52:26.169412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:14.458 [2024-11-20 14:52:26.170257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:14.458 [2024-11-20 14:52:26.170470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:14.458 [2024-11-20 14:52:26.170619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:33:14.458 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:14.458 [2024-11-20 14:52:26.400820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.718 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:14.718 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.977 [2024-11-20 14:52:26.805286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.978 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:15.237 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:15.496 Malloc0 00:33:15.496 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:15.496 Delay0 00:33:15.753 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.753 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:16.011 NULL1 00:33:16.012 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:33:16.270 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1765200 00:33:16.270 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:16.270 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:16.270 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:17.645 Read completed with error (sct=0, sc=11) 00:33:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.645 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:33:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:17.645 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:33:17.645 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:33:17.903 true 00:33:17.903 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:17.903 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:18.838 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:18.838 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:33:18.838 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:33:19.097 true 00:33:19.097 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:19.097 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:19.356 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:19.356 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:33:19.356 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:33:19.615 true 00:33:19.615 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:19.615 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:20.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.550 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:20.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:20.809 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:33:20.809 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:33:21.067 true 00:33:21.067 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:21.067 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:22.004 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.004 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:33:22.004 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:33:22.262 true 00:33:22.262 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:22.262 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:22.521 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:22.780 14:52:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:33:22.780 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:33:22.780 true 00:33:23.038 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:23.038 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:23.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.975 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:23.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:23.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.234 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:24.234 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:33:24.234 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:33:24.493 true 00:33:24.493 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1765200 00:33:24.493 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:25.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:25.320 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:25.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:25.320 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:33:25.320 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:33:25.579 true 00:33:25.579 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:25.579 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:25.838 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:26.097 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:33:26.097 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:33:26.097 true 00:33:26.097 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:26.097 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:27.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.471 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:27.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.471 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:27.471 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:33:27.471 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:33:27.741 true 00:33:27.741 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:27.741 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:28.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:28.675 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:28.675 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:33:28.675 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:33:28.934 true 00:33:28.934 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:28.934 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:29.193 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:29.452 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:33:29.452 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:33:29.452 true 00:33:29.452 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:29.452 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:30.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.828 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:30.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:30.828 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:33:30.828 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:33:31.087 true 00:33:31.087 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:31.087 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:32.022 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:33:32.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:32.022 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:33:32.022 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:33:32.281 true 00:33:32.281 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:32.281 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:32.540 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:32.799 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:33:32.799 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:33:32.799 true 00:33:32.799 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:32.799 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:34.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.175 14:52:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:34.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:34.175 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:33:34.175 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:33:34.434 true 00:33:34.434 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:34.434 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.368 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:35.368 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:33:35.368 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1017 00:33:35.626 true 00:33:35.626 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:35.626 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:35.884 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:36.142 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:33:36.142 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:33:36.142 true 00:33:36.142 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:36.142 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:37.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:37.514 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:37.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:37.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:37.514 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:33:37.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:37.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:37.514 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:33:37.514 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:33:37.772 true 00:33:37.772 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:37.772 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:38.707 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:38.707 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:38.707 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:33:38.707 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:33:38.965 true 00:33:38.965 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:38.965 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:38.965 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:39.224 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:33:39.224 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:33:39.482 true 00:33:39.482 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:39.482 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:40.856 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1022 00:33:40.856 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:33:41.115 true 00:33:41.115 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:41.115 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.049 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.050 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:33:42.050 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:33:42.309 true 00:33:42.309 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:42.309 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.567 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:42.567 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:33:42.567 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:42.825 true 00:33:42.825 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:42.825 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:44.200 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:44.200 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:44.458 true 00:33:44.458 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:44.458 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.393 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:33:45.393 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:45.393 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:45.650 true 00:33:45.650 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200 00:33:45.650 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:45.909 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:45.909 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:45.909 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 
00:33:46.167 true
00:33:46.167 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200
00:33:46.167 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:47.103 Initializing NVMe Controllers
00:33:47.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:47.104 Controller IO queue size 128, less than required.
00:33:47.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:47.104 Controller IO queue size 128, less than required.
00:33:47.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:47.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:47.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:47.104 Initialization complete. Launching workers.
00:33:47.104 ========================================================
00:33:47.104                                                                                                       Latency(us)
00:33:47.104 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:33:47.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    2261.17       1.10   41373.31    2411.99 1025701.79
00:33:47.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   18023.50       8.80    7101.46    1996.84  380988.92
00:33:47.104 ========================================================
00:33:47.104 Total                                                                          :   20284.67       9.90   10921.80    1996.84 1025701.79
00:33:47.104
00:33:47.361 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:47.361 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:33:47.361 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:33:47.618 true
00:33:47.618 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1765200
00:33:47.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1765200) - No such process
00:33:47.618 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1765200
00:33:47.618 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:47.876 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
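The single-namespace phase logged above keeps executing ns_hotplug_stress.sh lines @44-@50: check the target PID with `kill -0`, remove namespace 1, re-add the Delay0 bdev, bump `null_size`, and resize NULL1, until the target process exits. A minimal runnable sketch of that control flow, with the SPDK `rpc.py` invocations replaced by hypothetical `echo` stubs and the current shell's PID standing in for the spdk_tgt PID:

```shell
# Sketch of the @44-@50 stress loop from ns_hotplug_stress.sh.
# The rpc.py calls are echo stubs (hypothetical stand-ins) and $$ stands
# in for the spdk_tgt PID that the real script checks with kill -0.
null_size=1000
target_pid=$$
for _ in 1 2 3 4 5; do
    kill -0 "$target_pid" 2>/dev/null || break          # @44: stop when target dies
    echo "remove_ns nqn.2016-06.io.spdk:cnode1 1"       # @45 stub
    echo "add_ns nqn.2016-06.io.spdk:cnode1 Delay0"     # @46 stub
    null_size=$((null_size + 1))                        # @49: grow the null bdev
    echo "bdev_null_resize NULL1 $null_size"            # @50 stub
done
echo "final null_size=$null_size"
```

In the real run the loop only ends when `kill -0` fails (the "No such process" line later in the log); here the iteration count is capped so the sketch terminates on its own.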
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:48.136 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:33:48.136 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:33:48.136 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:33:48.136 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.136 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:33:48.136 null0 00:33:48.136 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.136 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.136 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:33:48.395 null1 00:33:48.395 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.395 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.395 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:33:48.654 null2 00:33:48.654 14:53:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.654 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.654 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:33:48.913 null3 00:33:48.913 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.913 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.913 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:33:48.913 null4 00:33:48.913 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:48.913 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:48.913 14:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:33:49.172 null5 00:33:49.172 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:49.172 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:49.172 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:33:49.432 null6 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:33:49.432 null7 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
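With null0 through null7 created, the script fans out eight parallel `add_remove` workers (ns_hotplug_stress.sh @58-@66), each pairing one namespace ID with one null bdev and recording its PID for a final `wait`. A minimal sketch of that fan-out, again with `echo` stubs (hypothetical stand-ins) in place of the `rpc.py` add/remove calls:

```shell
# Hypothetical sketch of the parallel add_remove phase (@58-@66): eight
# background workers, each repeatedly adding and removing one namespace.
add_remove() {
    local nsid=$1 bdev=$2
    for _ in 1 2 3; do
        echo "add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev"  # @17 stub
        echo "remove_ns nqn.2016-06.io.spdk:cnode1 $nsid"        # @18 stub
    done
}
pids=()
for i in 0 1 2 3 4 5 6 7; do
    add_remove "$((i + 1))" "null$i" &   # @63: one worker per null bdev
    pids+=($!)                           # @64: remember the worker PID
done
wait "${pids[@]}"                        # @66: join all eight workers
echo "workers=${#pids[@]}"
```

This mirrors why the interleaved `(( ++i ))` / `pids+=($!)` lines in the log appear out of order: eight subshells are writing their trace output concurrently.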
00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.432 14:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.432 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1770559 1770561 1770564 1770568 1770569 1770572 1770575 1770578 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:49.691 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:49.950 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:50.208 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:50.208 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:50.208 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.209 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:50.209 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:50.209 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:50.209 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:50.209 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:50.466 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.466 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.466 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:50.466 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.466 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.466 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:50.466 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.467 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.467 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:50.467 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:50.726 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:50.726 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:50.726 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:50.985 14:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:50.985 14:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:51.244 14:53:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.244 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:51.503 14:53:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:51.503 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:51.761 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.020 14:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:52.278 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.536 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:52.794 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.795 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.795 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:52.795 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:52.795 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:52.795 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:53.054 14:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:53.054 14:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.313 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:53.571 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.829 14:53:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.829 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.830 rmmod nvme_tcp 00:33:53.830 rmmod nvme_fabrics 00:33:53.830 rmmod nvme_keyring 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1764941 ']' 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1764941 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1764941 ']' 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1764941 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.830 14:53:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764941 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764941' 00:33:53.830 killing process with pid 1764941 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1764941 00:33:53.830 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1764941 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:54.090 
14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.090 14:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.627 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.627 00:33:56.627 real 0m48.161s 00:33:56.627 user 2m59.805s 00:33:56.627 sys 0m20.459s 00:33:56.627 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.627 14:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:56.627 ************************************ 00:33:56.627 END TEST nvmf_ns_hotplug_stress 00:33:56.627 ************************************ 00:33:56.627 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:56.627 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:56.627 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:56.628 ************************************ 00:33:56.628 START TEST nvmf_delete_subsystem 00:33:56.628 ************************************ 00:33:56.628 14:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:33:56.628 * Looking for test storage... 00:33:56.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.628 14:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.628 14:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:56.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.628 --rc genhtml_branch_coverage=1 00:33:56.628 --rc genhtml_function_coverage=1 00:33:56.628 --rc genhtml_legend=1 00:33:56.628 --rc geninfo_all_blocks=1 00:33:56.628 --rc geninfo_unexecuted_blocks=1 00:33:56.628 00:33:56.628 ' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:56.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.628 --rc genhtml_branch_coverage=1 00:33:56.628 --rc genhtml_function_coverage=1 00:33:56.628 --rc genhtml_legend=1 00:33:56.628 --rc geninfo_all_blocks=1 00:33:56.628 --rc geninfo_unexecuted_blocks=1 00:33:56.628 00:33:56.628 ' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:56.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.628 --rc genhtml_branch_coverage=1 00:33:56.628 --rc 
genhtml_function_coverage=1 00:33:56.628 --rc genhtml_legend=1 00:33:56.628 --rc geninfo_all_blocks=1 00:33:56.628 --rc geninfo_unexecuted_blocks=1 00:33:56.628 00:33:56.628 ' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:56.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.628 --rc genhtml_branch_coverage=1 00:33:56.628 --rc genhtml_function_coverage=1 00:33:56.628 --rc genhtml_legend=1 00:33:56.628 --rc geninfo_all_blocks=1 00:33:56.628 --rc geninfo_unexecuted_blocks=1 00:33:56.628 00:33:56.628 ' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.628 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.629 14:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.629 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:01.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:01.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:01.907 Found net devices under 0000:86:00.0: cvl_0_0 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.907 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.908 14:53:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:01.908 Found net devices under 0000:86:00.1: cvl_0_1 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.908 14:53:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.908 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:02.200 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:02.200 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:02.201 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:02.201 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:02.201 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:02.201 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:02.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:02.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:34:02.201 00:34:02.201 --- 10.0.0.2 ping statistics --- 00:34:02.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.201 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:02.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:34:02.201 00:34:02.201 --- 10.0.0.1 ping statistics --- 00:34:02.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.201 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1774784 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1774784 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1774784 ']' 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.201 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.485 [2024-11-20 14:53:14.176159] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:02.485 [2024-11-20 14:53:14.177113] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
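The trace above launches `nvmf_tgt` in the target namespace and then calls `waitforlisten 1774784`, which blocks until the app's RPC socket comes up. The helper below is a simplified, hypothetical stand-in for what `autotest_common.sh` does there (the real helper has more retry/error handling); the function name and defaults mirror the trace, but the body is a sketch:

```shell
# Simplified stand-in for autotest_common.sh's waitforlisten: poll until the
# target pid is alive AND its UNIX-domain RPC socket exists.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [[ -S $rpc_addr ]] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the log this is what separates `nvmf_tgt` starting (`nvmfpid=1774784`) from the first `rpc_cmd nvmf_create_transport` call: RPCs are only issued once the socket check succeeds.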
00:34:02.485 [2024-11-20 14:53:14.177147] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.485 [2024-11-20 14:53:14.257756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:02.485 [2024-11-20 14:53:14.299745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.485 [2024-11-20 14:53:14.299781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.485 [2024-11-20 14:53:14.299789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.485 [2024-11-20 14:53:14.299794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.485 [2024-11-20 14:53:14.299799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.485 [2024-11-20 14:53:14.301027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.485 [2024-11-20 14:53:14.301028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.485 [2024-11-20 14:53:14.369236] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:02.485 [2024-11-20 14:53:14.369882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:02.485 [2024-11-20 14:53:14.370114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:02.485 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.485 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:34:02.485 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:02.485 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:02.486 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.486 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.486 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:02.486 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.486 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.812 [2024-11-20 14:53:14.437844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.812 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.813 [2024-11-20 14:53:14.466212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.813 NULL1 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:34:02.813 Delay0 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1774997 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:02.813 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:02.813 [2024-11-20 14:53:14.579160] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
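The `rpc_cmd` calls traced above build the target configuration one object at a time: TCP transport, subsystem `cnode1`, a listener on 10.0.0.2:4420, a null bdev, a delay bdev wrapping it, and finally the namespace. Collected as a plain script (every method name and argument is copied verbatim from the trace; the `$rpc` indirection and echo dry-run default are illustrative, with `scripts/rpc.py` as the assumed client against a live target):

```shell
# Hedged sketch of the target setup driven above via rpc_cmd.
# Set RPC="scripts/rpc.py" to run against a live target; defaults to a dry run.
rpc=${RPC:-echo}

setup_delete_subsystem_target() {
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # sizes as in the log: 1000 MiB, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # large artificial per-I/O delays
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
```

The delay bdev is the point of the test: I/O against `Delay0` stays in flight long enough that `nvmf_delete_subsystem` lands while requests are queued, which is what produces the aborted-I/O noise later in the log.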
00:34:04.713 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.713 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.713 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 starting I/O failed: -6 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 starting I/O failed: -6 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 starting I/O failed: -6 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 starting I/O failed: -6 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 starting I/O failed: -6 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 Write completed with error (sct=0, sc=8) 00:34:04.971 Read completed with error (sct=0, sc=8) 00:34:04.971 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, 
sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 [2024-11-20 14:53:16.831520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11304a0 is same with the state(6) to be set 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read 
completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, 
sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read 
completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, 
sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 starting I/O failed: -6 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Read completed with error (sct=0, sc=8) 00:34:04.972 Write completed with error (sct=0, sc=8) 00:34:04.972 [2024-11-20 14:53:16.834177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5550000c40 is same with the state(6) to be set 00:34:05.908 [2024-11-20 14:53:17.798989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11319a0 is same with the state(6) to be set 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, 
sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 [2024-11-20 14:53:17.834700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11302c0 is same with the state(6) to be set 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 
00:34:05.908 [2024-11-20 14:53:17.836982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f555000d350 is same with the state(6) to be set 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 [2024-11-20 14:53:17.837200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f555000d020 is same with the state(6) to 
be set 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.908 Read completed with error (sct=0, sc=8) 00:34:05.908 Write completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Write completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Write completed with error (sct=0, sc=8) 00:34:05.909 Write completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Write completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 Read completed with error (sct=0, sc=8) 00:34:05.909 [2024-11-20 14:53:17.837555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f555000d7e0 is same with the state(6) to be set 00:34:05.909 Initializing NVMe Controllers 00:34:05.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:05.909 Controller IO queue size 128, less than required. 00:34:05.909 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:05.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:05.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:05.909 Initialization complete. Launching workers. 
00:34:05.909 ======================================================== 00:34:05.909 Latency(us) 00:34:05.909 Device Information : IOPS MiB/s Average min max 00:34:05.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.12 0.08 879248.13 289.87 1008890.47 00:34:05.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.07 0.08 1035414.48 498.44 2001715.46 00:34:05.909 ======================================================== 00:34:05.909 Total : 319.20 0.16 959520.56 289.87 2001715.46 00:34:05.909 00:34:05.909 [2024-11-20 14:53:17.838110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11319a0 (9): Bad file descriptor 00:34:05.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:05.909 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.909 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:05.909 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1774997 00:34:05.909 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1774997 00:34:06.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1774997) - No such process 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1774997 00:34:06.477 14:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1774997 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1774997 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:06.477 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
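The `kill -0 $perf_pid` / `sleep 0.5` pattern running through this stretch of the trace is a liveness poll: signal 0 delivers nothing, it only reports whether the PID still exists, and the "No such process" line is the loop discovering that perf has exited. A minimal reconstruction of the loop from delete_subsystem.sh (the function name is illustrative; the cap of 30 iterations and the 0.5 s interval match the traced script):

```shell
# Hedged reconstruction of the perf-process wait loop seen in the trace.
# kill -0 sends no signal; it only checks whether the PID is still alive.
wait_for_perf_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            return 1    # still running after ~15 s: give up
        fi
        sleep 0.5
    done
    return 0            # process is gone
}
```

The trace's follow-up `NOT wait $pid` then reaps the PID and asserts that `wait` fails, confirming the process really died rather than merely becoming unsignalable.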
00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:06.478 [2024-11-20 14:53:18.366056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1775485 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:06.478 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:06.736 [2024-11-20 14:53:18.448701] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:34:06.994 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:06.994 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:06.994 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:07.560 14:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:07.560 14:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:07.560 14:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:08.126 14:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:08.126 14:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:08.126 14:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:08.692 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:34:08.692 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:08.692 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:08.950 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:08.950 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:08.950 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:09.517 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:09.517 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:09.517 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:09.776 Initializing NVMe Controllers 00:34:09.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:09.776 Controller IO queue size 128, less than required. 00:34:09.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:09.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:09.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:09.776 Initialization complete. Launching workers. 
00:34:09.776 ======================================================== 00:34:09.776 Latency(us) 00:34:09.776 Device Information : IOPS MiB/s Average min max 00:34:09.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002361.77 1000210.02 1006438.82 00:34:09.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003870.79 1000191.51 1010565.83 00:34:09.776 ======================================================== 00:34:09.776 Total : 256.00 0.12 1003116.28 1000191.51 1010565.83 00:34:09.776 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1775485 00:34:10.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1775485) - No such process 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1775485 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.036 rmmod nvme_tcp 00:34:10.036 rmmod nvme_fabrics 00:34:10.036 rmmod nvme_keyring 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1774784 ']' 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1774784 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1774784 ']' 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1774784 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.036 14:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1774784 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.295 14:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1774784' 00:34:10.295 killing process with pid 1774784 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1774784 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1774784 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.295 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.296 14:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.296 14:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.832 00:34:12.832 real 0m16.240s 00:34:12.832 user 0m26.310s 00:34:12.832 sys 0m6.270s 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:12.832 ************************************ 00:34:12.832 END TEST nvmf_delete_subsystem 00:34:12.832 ************************************ 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:12.832 ************************************ 00:34:12.832 START TEST nvmf_host_management 00:34:12.832 ************************************ 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:12.832 * Looking for test storage... 
00:34:12.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.832 14:53:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:12.832 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.833 --rc genhtml_branch_coverage=1 00:34:12.833 --rc genhtml_function_coverage=1 00:34:12.833 --rc genhtml_legend=1 00:34:12.833 --rc geninfo_all_blocks=1 00:34:12.833 --rc geninfo_unexecuted_blocks=1 00:34:12.833 00:34:12.833 ' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.833 --rc genhtml_branch_coverage=1 00:34:12.833 --rc genhtml_function_coverage=1 00:34:12.833 --rc genhtml_legend=1 00:34:12.833 --rc geninfo_all_blocks=1 00:34:12.833 --rc geninfo_unexecuted_blocks=1 00:34:12.833 00:34:12.833 ' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.833 --rc genhtml_branch_coverage=1 00:34:12.833 --rc genhtml_function_coverage=1 00:34:12.833 --rc genhtml_legend=1 00:34:12.833 --rc geninfo_all_blocks=1 00:34:12.833 --rc geninfo_unexecuted_blocks=1 00:34:12.833 00:34:12.833 ' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:12.833 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.833 --rc genhtml_branch_coverage=1 00:34:12.833 --rc genhtml_function_coverage=1 00:34:12.833 --rc genhtml_legend=1 00:34:12.833 --rc geninfo_all_blocks=1 00:34:12.833 --rc geninfo_unexecuted_blocks=1 00:34:12.833 00:34:12.833 ' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.833 14:53:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.833 
14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:12.833 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.834 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.834 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.834 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.834 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.834 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.834 14:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.405 
14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.405 14:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:19.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.405 14:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:19.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.405 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.406 14:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:19.406 Found net devices under 0000:86:00.0: cvl_0_0 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:19.406 Found net devices under 0000:86:00.1: cvl_0_1 00:34:19.406 14:53:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:34:19.406 00:34:19.406 --- 10.0.0.2 ping statistics --- 00:34:19.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.406 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:34:19.406 00:34:19.406 --- 10.0.0.1 ping statistics --- 00:34:19.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.406 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1779609 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1779609 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1779609 ']' 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.406 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.406 [2024-11-20 14:53:30.475021] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.406 [2024-11-20 14:53:30.476035] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:34:19.406 [2024-11-20 14:53:30.476076] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.406 [2024-11-20 14:53:30.557374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.406 [2024-11-20 14:53:30.599182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.406 [2024-11-20 14:53:30.599219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.406 [2024-11-20 14:53:30.599227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.406 [2024-11-20 14:53:30.599233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.406 [2024-11-20 14:53:30.599238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:19.406 [2024-11-20 14:53:30.600753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.406 [2024-11-20 14:53:30.600862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.407 [2024-11-20 14:53:30.600892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.407 [2024-11-20 14:53:30.600894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:19.407 [2024-11-20 14:53:30.670134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.407 [2024-11-20 14:53:30.671283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:19.407 [2024-11-20 14:53:30.671392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.407 [2024-11-20 14:53:30.671745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:19.407 [2024-11-20 14:53:30.671776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.407 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.407 [2024-11-20 14:53:31.357763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.666 14:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.666 Malloc0 00:34:19.666 [2024-11-20 14:53:31.450027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1779711 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1779711 /var/tmp/bdevperf.sock 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1779711 ']' 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:19.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.666 { 00:34:19.666 "params": { 00:34:19.666 "name": "Nvme$subsystem", 00:34:19.666 "trtype": "$TEST_TRANSPORT", 00:34:19.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.666 "adrfam": "ipv4", 00:34:19.666 "trsvcid": "$NVMF_PORT", 00:34:19.666 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.666 "hdgst": ${hdgst:-false}, 00:34:19.666 "ddgst": ${ddgst:-false} 00:34:19.666 }, 00:34:19.666 "method": "bdev_nvme_attach_controller" 00:34:19.666 } 00:34:19.666 EOF 00:34:19.666 )") 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:19.666 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:19.666 "params": { 00:34:19.666 "name": "Nvme0", 00:34:19.666 "trtype": "tcp", 00:34:19.666 "traddr": "10.0.0.2", 00:34:19.666 "adrfam": "ipv4", 00:34:19.666 "trsvcid": "4420", 00:34:19.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.666 "hdgst": false, 00:34:19.666 "ddgst": false 00:34:19.666 }, 00:34:19.666 "method": "bdev_nvme_attach_controller" 00:34:19.666 }' 00:34:19.666 [2024-11-20 14:53:31.544301] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:34:19.666 [2024-11-20 14:53:31.544352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779711 ] 00:34:19.666 [2024-11-20 14:53:31.620143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.925 [2024-11-20 14:53:31.662026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.925 Running I/O for 10 seconds... 
00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:34:19.925 14:53:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.925 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:20.184 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.184 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:34:20.184 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:34:20.184 14:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.443 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:20.443 [2024-11-20 14:53:32.213529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7d70 is same with the state(6) to be set 00:34:20.443 [2024-11-20 14:53:32.213776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.443 [2024-11-20 14:53:32.213808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.443 [2024-11-20 14:53:32.213826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.443 [2024-11-20 14:53:32.213833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.213989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.213996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 
[2024-11-20 14:53:32.214096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214257] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:20.444 [2024-11-20 14:53:32.214427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.444 [2024-11-20 14:53:32.214523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.444 [2024-11-20 14:53:32.214531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.214757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:20.445 [2024-11-20 14:53:32.214771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.445 [2024-11-20 14:53:32.214778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.215749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:20.445 task offset: 99840 on job bdev=Nvme0n1 fails 00:34:20.445 00:34:20.445 Latency(us) 00:34:20.445 [2024-11-20T13:53:32.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.445 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:20.445 Job: Nvme0n1 ended in about 0.40 seconds with error 00:34:20.445 Verification LBA range: start 0x0 length 0x400 00:34:20.445 Nvme0n1 : 0.40 1939.48 121.22 161.62 0.00 29613.17 1545.79 27126.21 00:34:20.445 [2024-11-20T13:53:32.403Z] =================================================================================================================== 00:34:20.445 [2024-11-20T13:53:32.403Z] Total : 1939.48 121.22 161.62 0.00 29613.17 1545.79 27126.21 00:34:20.445 [2024-11-20 14:53:32.218168] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:20.445 [2024-11-20 14:53:32.218195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8e500 (9): Bad file descriptor 00:34:20.445 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.445 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:34:20.445 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:20.445 [2024-11-20 14:53:32.219228] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:34:20.445 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:20.445 [2024-11-20 14:53:32.219336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:20.445 [2024-11-20 14:53:32.219359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.445 [2024-11-20 14:53:32.219370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:34:20.445 [2024-11-20 14:53:32.219377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:34:20.445 [2024-11-20 14:53:32.219384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:20.445 [2024-11-20 14:53:32.219391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb8e500 00:34:20.445 [2024-11-20 14:53:32.219410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8e500 (9): Bad file descriptor 00:34:20.445 [2024-11-20 14:53:32.219422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:20.445 [2024-11-20 14:53:32.219430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:20.445 [2024-11-20 14:53:32.219438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:20.445 [2024-11-20 14:53:32.219446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:20.445 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.445 14:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1779711 00:34:21.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1779711) - No such process 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:21.383 { 
00:34:21.383 "params": { 00:34:21.383 "name": "Nvme$subsystem", 00:34:21.383 "trtype": "$TEST_TRANSPORT", 00:34:21.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:21.383 "adrfam": "ipv4", 00:34:21.383 "trsvcid": "$NVMF_PORT", 00:34:21.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:21.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:21.383 "hdgst": ${hdgst:-false}, 00:34:21.383 "ddgst": ${ddgst:-false} 00:34:21.383 }, 00:34:21.383 "method": "bdev_nvme_attach_controller" 00:34:21.383 } 00:34:21.383 EOF 00:34:21.383 )") 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:34:21.383 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:21.383 "params": { 00:34:21.383 "name": "Nvme0", 00:34:21.383 "trtype": "tcp", 00:34:21.383 "traddr": "10.0.0.2", 00:34:21.383 "adrfam": "ipv4", 00:34:21.383 "trsvcid": "4420", 00:34:21.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:21.383 "hdgst": false, 00:34:21.383 "ddgst": false 00:34:21.383 }, 00:34:21.383 "method": "bdev_nvme_attach_controller" 00:34:21.383 }' 00:34:21.383 [2024-11-20 14:53:33.280869] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:34:21.383 [2024-11-20 14:53:33.280917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780123 ] 00:34:21.642 [2024-11-20 14:53:33.356074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.642 [2024-11-20 14:53:33.397817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.900 Running I/O for 1 seconds... 00:34:22.835 1984.00 IOPS, 124.00 MiB/s 00:34:22.835 Latency(us) 00:34:22.835 [2024-11-20T13:53:34.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:22.835 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:22.835 Verification LBA range: start 0x0 length 0x400 00:34:22.835 Nvme0n1 : 1.02 2008.33 125.52 0.00 0.00 31363.42 4929.45 27126.21 00:34:22.835 [2024-11-20T13:53:34.793Z] =================================================================================================================== 00:34:22.835 [2024-11-20T13:53:34.793Z] Total : 2008.33 125.52 0.00 0.00 31363.42 4929.45 27126.21 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.094 rmmod nvme_tcp 00:34:23.094 rmmod nvme_fabrics 00:34:23.094 rmmod nvme_keyring 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1779609 ']' 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1779609 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1779609 ']' 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1779609 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:34:23.094 14:53:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.094 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1779609 00:34:23.094 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:23.094 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:23.094 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1779609' 00:34:23.094 killing process with pid 1779609 00:34:23.094 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1779609 00:34:23.094 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1779609 00:34:23.355 [2024-11-20 14:53:35.163201] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.355 14:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.355 14:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:34:25.893 00:34:25.893 real 0m12.951s 00:34:25.893 user 0m18.118s 00:34:25.893 sys 0m6.239s 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:34:25.893 ************************************ 00:34:25.893 END TEST nvmf_host_management 00:34:25.893 ************************************ 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:25.893 
14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:25.893 ************************************ 00:34:25.893 START TEST nvmf_lvol 00:34:25.893 ************************************ 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:34:25.893 * Looking for test storage... 00:34:25.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.893 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.893 14:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:25.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.894 --rc genhtml_branch_coverage=1 00:34:25.894 --rc 
genhtml_function_coverage=1 00:34:25.894 --rc genhtml_legend=1 00:34:25.894 --rc geninfo_all_blocks=1 00:34:25.894 --rc geninfo_unexecuted_blocks=1 00:34:25.894 00:34:25.894 ' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:25.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.894 --rc genhtml_branch_coverage=1 00:34:25.894 --rc genhtml_function_coverage=1 00:34:25.894 --rc genhtml_legend=1 00:34:25.894 --rc geninfo_all_blocks=1 00:34:25.894 --rc geninfo_unexecuted_blocks=1 00:34:25.894 00:34:25.894 ' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:25.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.894 --rc genhtml_branch_coverage=1 00:34:25.894 --rc genhtml_function_coverage=1 00:34:25.894 --rc genhtml_legend=1 00:34:25.894 --rc geninfo_all_blocks=1 00:34:25.894 --rc geninfo_unexecuted_blocks=1 00:34:25.894 00:34:25.894 ' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:25.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.894 --rc genhtml_branch_coverage=1 00:34:25.894 --rc genhtml_function_coverage=1 00:34:25.894 --rc genhtml_legend=1 00:34:25.894 --rc geninfo_all_blocks=1 00:34:25.894 --rc geninfo_unexecuted_blocks=1 00:34:25.894 00:34:25.894 ' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.894 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.895 14:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.895 14:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.895 14:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.171 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:31.172 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:31.172 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.172 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:31.172 Found net devices under 0000:86:00.0: cvl_0_0 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:31.172 Found net devices under 0000:86:00.1: cvl_0_1 00:34:31.172 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.172 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:31.172 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:31.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:31.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:34:31.432 00:34:31.432 --- 10.0.0.2 ping statistics --- 00:34:31.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.432 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:31.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:34:31.432 00:34:31.432 --- 10.0.0.1 ping statistics --- 00:34:31.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.432 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:31.432 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:31.692 
14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1783851 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1783851 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1783851 ']' 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.692 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:31.692 [2024-11-20 14:53:43.460728] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:34:31.692 [2024-11-20 14:53:43.461674] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:34:31.692 [2024-11-20 14:53:43.461711] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:31.692 [2024-11-20 14:53:43.539543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:31.692 [2024-11-20 14:53:43.580136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.692 [2024-11-20 14:53:43.580175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.692 [2024-11-20 14:53:43.580182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.692 [2024-11-20 14:53:43.580188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.692 [2024-11-20 14:53:43.580193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.692 [2024-11-20 14:53:43.581559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.692 [2024-11-20 14:53:43.581666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.692 [2024-11-20 14:53:43.581668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:31.951 [2024-11-20 14:53:43.650810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:31.951 [2024-11-20 14:53:43.651613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:31.951 [2024-11-20 14:53:43.651708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:31.951 [2024-11-20 14:53:43.651897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.951 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:31.951 [2024-11-20 14:53:43.894465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.211 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:32.211 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:32.211 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:32.470 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:32.470 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:32.729 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:32.988 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=94cfeaf3-fc10-4197-9fd1-17446b85e340 00:34:32.988 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94cfeaf3-fc10-4197-9fd1-17446b85e340 lvol 20 00:34:33.250 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ddf6f42c-392b-4389-a886-350021213612 00:34:33.250 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:33.250 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ddf6f42c-392b-4389-a886-350021213612 00:34:33.573 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.831 [2024-11-20 14:53:45.546372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.831 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:33.831 
14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1784166 00:34:33.831 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:33.831 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:35.206 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ddf6f42c-392b-4389-a886-350021213612 MY_SNAPSHOT 00:34:35.206 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=93de4a99-ad30-4c43-8d38-7368202668ad 00:34:35.206 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ddf6f42c-392b-4389-a886-350021213612 30 00:34:35.464 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 93de4a99-ad30-4c43-8d38-7368202668ad MY_CLONE 00:34:35.722 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0a576d24-9e65-4e1c-9487-4c979b2ebbc4 00:34:35.722 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0a576d24-9e65-4e1c-9487-4c979b2ebbc4 00:34:36.288 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1784166 00:34:44.398 Initializing NVMe Controllers 00:34:44.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:44.398 
Controller IO queue size 128, less than required. 00:34:44.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:44.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:44.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:44.398 Initialization complete. Launching workers. 00:34:44.398 ======================================================== 00:34:44.398 Latency(us) 00:34:44.398 Device Information : IOPS MiB/s Average min max 00:34:44.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12137.08 47.41 10548.31 3364.66 58652.77 00:34:44.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12313.38 48.10 10400.24 2371.18 66545.03 00:34:44.398 ======================================================== 00:34:44.398 Total : 24450.45 95.51 10473.74 2371.18 66545.03 00:34:44.398 00:34:44.398 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:44.657 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ddf6f42c-392b-4389-a886-350021213612 00:34:44.657 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94cfeaf3-fc10-4197-9fd1-17446b85e340 00:34:44.916 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:44.917 rmmod nvme_tcp 00:34:44.917 rmmod nvme_fabrics 00:34:44.917 rmmod nvme_keyring 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1783851 ']' 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1783851 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1783851 ']' 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1783851 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.917 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1783851 00:34:45.177 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:45.177 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:45.177 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1783851' 00:34:45.177 killing process with pid 1783851 00:34:45.177 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1783851 00:34:45.177 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1783851 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.177 14:53:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.177 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.715 00:34:47.715 real 0m21.871s 00:34:47.715 user 0m55.613s 00:34:47.715 sys 0m9.932s 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:47.715 ************************************ 00:34:47.715 END TEST nvmf_lvol 00:34:47.715 ************************************ 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.715 ************************************ 00:34:47.715 START TEST nvmf_lvs_grow 00:34:47.715 ************************************ 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:47.715 * Looking for test storage... 
00:34:47.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.715 14:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.715 14:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:47.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.715 --rc genhtml_branch_coverage=1 00:34:47.715 --rc genhtml_function_coverage=1 00:34:47.715 --rc genhtml_legend=1 00:34:47.715 --rc geninfo_all_blocks=1 00:34:47.715 --rc geninfo_unexecuted_blocks=1 00:34:47.715 00:34:47.715 ' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:47.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.715 --rc genhtml_branch_coverage=1 00:34:47.715 --rc genhtml_function_coverage=1 00:34:47.715 --rc genhtml_legend=1 00:34:47.715 --rc geninfo_all_blocks=1 00:34:47.715 --rc geninfo_unexecuted_blocks=1 00:34:47.715 00:34:47.715 ' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:47.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.715 --rc genhtml_branch_coverage=1 00:34:47.715 --rc genhtml_function_coverage=1 00:34:47.715 --rc genhtml_legend=1 00:34:47.715 --rc geninfo_all_blocks=1 00:34:47.715 --rc geninfo_unexecuted_blocks=1 00:34:47.715 00:34:47.715 ' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:47.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.715 --rc genhtml_branch_coverage=1 00:34:47.715 --rc genhtml_function_coverage=1 00:34:47.715 --rc genhtml_legend=1 00:34:47.715 --rc geninfo_all_blocks=1 00:34:47.715 --rc 
geninfo_unexecuted_blocks=1 00:34:47.715 00:34:47.715 ' 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.715 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:47.716 14:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.716 14:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.716 14:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.716 14:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:54.286 
14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:54.286 14:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:54.286 14:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:54.286 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:54.286 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:54.286 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:54.287 Found net devices under 0000:86:00.0: cvl_0_0 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.287 14:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:54.287 Found net devices under 0000:86:00.1: cvl_0_1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:54.287 
14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:54.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:34:54.287 00:34:54.287 --- 10.0.0.2 ping statistics --- 00:34:54.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.287 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:54.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:54.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:34:54.287 00:34:54.287 --- 10.0.0.1 ping statistics --- 00:34:54.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.287 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:54.287 14:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1789583 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1789583 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1789583 ']' 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:54.287 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:54.288 [2024-11-20 14:54:05.441168] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:54.288 [2024-11-20 14:54:05.442090] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:34:54.288 [2024-11-20 14:54:05.442122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.288 [2024-11-20 14:54:05.519019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.288 [2024-11-20 14:54:05.560251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.288 [2024-11-20 14:54:05.560288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.288 [2024-11-20 14:54:05.560296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.288 [2024-11-20 14:54:05.560302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.288 [2024-11-20 14:54:05.560307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:54.288 [2024-11-20 14:54:05.560868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.288 [2024-11-20 14:54:05.629056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:54.288 [2024-11-20 14:54:05.629280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:54.288 [2024-11-20 14:54:05.881550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:54.288 ************************************ 00:34:54.288 START TEST lvs_grow_clean 00:34:54.288 ************************************ 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:34:54.288 14:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:54.288 14:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:54.288 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:54.288 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:54.547 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:34:54.547 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:34:54.547 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:54.806 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:54.806 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:54.806 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 lvol 150 00:34:54.806 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b080cdb0-8f8a-412e-93c3-f9ee52b7410d 00:34:54.806 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:54.806 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:55.066 [2024-11-20 14:54:06.913269] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:55.066 [2024-11-20 14:54:06.913405] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:55.066 true 00:34:55.066 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:34:55.066 14:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:55.326 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:55.326 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:55.584 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b080cdb0-8f8a-412e-93c3-f9ee52b7410d 00:34:55.585 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:55.844 [2024-11-20 14:54:07.673747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.844 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1790063 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1790063 /var/tmp/bdevperf.sock 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1790063 ']' 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.103 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:56.103 [2024-11-20 14:54:07.939222] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:34:56.103 [2024-11-20 14:54:07.939275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790063 ] 00:34:56.103 [2024-11-20 14:54:08.014522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.103 [2024-11-20 14:54:08.059397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.362 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.362 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:56.362 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:56.621 Nvme0n1 00:34:56.621 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:56.880 [ 00:34:56.880 { 00:34:56.880 "name": "Nvme0n1", 00:34:56.880 "aliases": [ 00:34:56.880 "b080cdb0-8f8a-412e-93c3-f9ee52b7410d" 00:34:56.880 ], 00:34:56.880 "product_name": "NVMe disk", 00:34:56.880 
"block_size": 4096, 00:34:56.880 "num_blocks": 38912, 00:34:56.880 "uuid": "b080cdb0-8f8a-412e-93c3-f9ee52b7410d", 00:34:56.880 "numa_id": 1, 00:34:56.880 "assigned_rate_limits": { 00:34:56.880 "rw_ios_per_sec": 0, 00:34:56.880 "rw_mbytes_per_sec": 0, 00:34:56.880 "r_mbytes_per_sec": 0, 00:34:56.880 "w_mbytes_per_sec": 0 00:34:56.880 }, 00:34:56.880 "claimed": false, 00:34:56.880 "zoned": false, 00:34:56.880 "supported_io_types": { 00:34:56.880 "read": true, 00:34:56.880 "write": true, 00:34:56.880 "unmap": true, 00:34:56.880 "flush": true, 00:34:56.880 "reset": true, 00:34:56.880 "nvme_admin": true, 00:34:56.880 "nvme_io": true, 00:34:56.880 "nvme_io_md": false, 00:34:56.880 "write_zeroes": true, 00:34:56.880 "zcopy": false, 00:34:56.880 "get_zone_info": false, 00:34:56.880 "zone_management": false, 00:34:56.880 "zone_append": false, 00:34:56.880 "compare": true, 00:34:56.880 "compare_and_write": true, 00:34:56.880 "abort": true, 00:34:56.880 "seek_hole": false, 00:34:56.880 "seek_data": false, 00:34:56.880 "copy": true, 00:34:56.880 "nvme_iov_md": false 00:34:56.880 }, 00:34:56.880 "memory_domains": [ 00:34:56.880 { 00:34:56.880 "dma_device_id": "system", 00:34:56.880 "dma_device_type": 1 00:34:56.880 } 00:34:56.880 ], 00:34:56.880 "driver_specific": { 00:34:56.880 "nvme": [ 00:34:56.880 { 00:34:56.880 "trid": { 00:34:56.880 "trtype": "TCP", 00:34:56.880 "adrfam": "IPv4", 00:34:56.880 "traddr": "10.0.0.2", 00:34:56.880 "trsvcid": "4420", 00:34:56.880 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:56.880 }, 00:34:56.880 "ctrlr_data": { 00:34:56.880 "cntlid": 1, 00:34:56.880 "vendor_id": "0x8086", 00:34:56.880 "model_number": "SPDK bdev Controller", 00:34:56.880 "serial_number": "SPDK0", 00:34:56.880 "firmware_revision": "25.01", 00:34:56.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.880 "oacs": { 00:34:56.880 "security": 0, 00:34:56.880 "format": 0, 00:34:56.880 "firmware": 0, 00:34:56.880 "ns_manage": 0 00:34:56.880 }, 00:34:56.880 "multi_ctrlr": true, 
00:34:56.880 "ana_reporting": false 00:34:56.880 }, 00:34:56.880 "vs": { 00:34:56.880 "nvme_version": "1.3" 00:34:56.880 }, 00:34:56.880 "ns_data": { 00:34:56.880 "id": 1, 00:34:56.880 "can_share": true 00:34:56.880 } 00:34:56.880 } 00:34:56.880 ], 00:34:56.880 "mp_policy": "active_passive" 00:34:56.880 } 00:34:56.880 } 00:34:56.880 ] 00:34:56.880 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1790344 00:34:56.880 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:56.880 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:57.139 Running I/O for 10 seconds... 00:34:58.085 Latency(us) 00:34:58.085 [2024-11-20T13:54:10.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:58.085 Nvme0n1 : 1.00 21971.00 85.82 0.00 0.00 0.00 0.00 0.00 00:34:58.085 [2024-11-20T13:54:10.043Z] =================================================================================================================== 00:34:58.085 [2024-11-20T13:54:10.043Z] Total : 21971.00 85.82 0.00 0.00 0.00 0.00 0.00 00:34:58.085 00:34:59.024 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:34:59.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:59.024 Nvme0n1 : 2.00 22178.50 86.63 0.00 0.00 0.00 0.00 0.00 00:34:59.024 [2024-11-20T13:54:10.982Z] 
=================================================================================================================== 00:34:59.024 [2024-11-20T13:54:10.982Z] Total : 22178.50 86.63 0.00 0.00 0.00 0.00 0.00 00:34:59.024 00:34:59.024 true 00:34:59.024 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:34:59.024 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:59.283 14:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:59.283 14:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:59.283 14:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1790344 00:35:00.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:00.220 Nvme0n1 : 3.00 22363.33 87.36 0.00 0.00 0.00 0.00 0.00 00:35:00.220 [2024-11-20T13:54:12.178Z] =================================================================================================================== 00:35:00.220 [2024-11-20T13:54:12.178Z] Total : 22363.33 87.36 0.00 0.00 0.00 0.00 0.00 00:35:00.220 00:35:01.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:01.157 Nvme0n1 : 4.00 22503.50 87.90 0.00 0.00 0.00 0.00 0.00 00:35:01.157 [2024-11-20T13:54:13.115Z] =================================================================================================================== 00:35:01.157 [2024-11-20T13:54:13.115Z] Total : 22503.50 87.90 0.00 0.00 0.00 0.00 0.00 00:35:01.157 00:35:02.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:35:02.095 Nvme0n1 : 5.00 22597.20 88.27 0.00 0.00 0.00 0.00 0.00 00:35:02.095 [2024-11-20T13:54:14.053Z] =================================================================================================================== 00:35:02.095 [2024-11-20T13:54:14.053Z] Total : 22597.20 88.27 0.00 0.00 0.00 0.00 0.00 00:35:02.095 00:35:03.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.033 Nvme0n1 : 6.00 22651.67 88.48 0.00 0.00 0.00 0.00 0.00 00:35:03.033 [2024-11-20T13:54:14.991Z] =================================================================================================================== 00:35:03.033 [2024-11-20T13:54:14.991Z] Total : 22651.67 88.48 0.00 0.00 0.00 0.00 0.00 00:35:03.033 00:35:03.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.970 Nvme0n1 : 7.00 22697.43 88.66 0.00 0.00 0.00 0.00 0.00 00:35:03.970 [2024-11-20T13:54:15.928Z] =================================================================================================================== 00:35:03.970 [2024-11-20T13:54:15.928Z] Total : 22697.43 88.66 0.00 0.00 0.00 0.00 0.00 00:35:03.970 00:35:05.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:05.348 Nvme0n1 : 8.00 22733.62 88.80 0.00 0.00 0.00 0.00 0.00 00:35:05.348 [2024-11-20T13:54:17.306Z] =================================================================================================================== 00:35:05.348 [2024-11-20T13:54:17.306Z] Total : 22733.62 88.80 0.00 0.00 0.00 0.00 0.00 00:35:05.348 00:35:06.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:06.285 Nvme0n1 : 9.00 22761.78 88.91 0.00 0.00 0.00 0.00 0.00 00:35:06.285 [2024-11-20T13:54:18.243Z] =================================================================================================================== 00:35:06.285 [2024-11-20T13:54:18.243Z] Total : 22761.78 88.91 0.00 0.00 0.00 0.00 0.00 00:35:06.285 
00:35:07.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:07.223 Nvme0n1 : 10.00 22784.30 89.00 0.00 0.00 0.00 0.00 0.00 00:35:07.223 [2024-11-20T13:54:19.181Z] =================================================================================================================== 00:35:07.223 [2024-11-20T13:54:19.181Z] Total : 22784.30 89.00 0.00 0.00 0.00 0.00 0.00 00:35:07.223 00:35:07.223 00:35:07.223 Latency(us) 00:35:07.223 [2024-11-20T13:54:19.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:07.223 Nvme0n1 : 10.01 22785.57 89.01 0.00 0.00 5614.63 3276.80 27126.21 00:35:07.223 [2024-11-20T13:54:19.181Z] =================================================================================================================== 00:35:07.223 [2024-11-20T13:54:19.181Z] Total : 22785.57 89.01 0.00 0.00 5614.63 3276.80 27126.21 00:35:07.223 { 00:35:07.223 "results": [ 00:35:07.223 { 00:35:07.223 "job": "Nvme0n1", 00:35:07.223 "core_mask": "0x2", 00:35:07.223 "workload": "randwrite", 00:35:07.223 "status": "finished", 00:35:07.223 "queue_depth": 128, 00:35:07.223 "io_size": 4096, 00:35:07.223 "runtime": 10.00506, 00:35:07.223 "iops": 22785.57050132633, 00:35:07.223 "mibps": 89.00613477080597, 00:35:07.223 "io_failed": 0, 00:35:07.223 "io_timeout": 0, 00:35:07.223 "avg_latency_us": 5614.632535755406, 00:35:07.223 "min_latency_us": 3276.8, 00:35:07.223 "max_latency_us": 27126.205217391303 00:35:07.223 } 00:35:07.223 ], 00:35:07.223 "core_count": 1 00:35:07.223 } 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1790063 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1790063 ']' 00:35:07.223 14:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1790063 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1790063 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1790063' 00:35:07.223 killing process with pid 1790063 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1790063 00:35:07.223 Received shutdown signal, test time was about 10.000000 seconds 00:35:07.223 00:35:07.223 Latency(us) 00:35:07.223 [2024-11-20T13:54:19.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.223 [2024-11-20T13:54:19.181Z] =================================================================================================================== 00:35:07.223 [2024-11-20T13:54:19.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.223 14:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1790063 00:35:07.223 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:07.481 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:07.740 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:35:07.740 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:07.999 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:07.999 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:07.999 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:07.999 [2024-11-20 14:54:19.925343] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:08.259 14:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:35:08.259 request: 00:35:08.259 { 00:35:08.259 "uuid": "c5a9e7ca-8fde-40cf-8887-e20299f9bf81", 00:35:08.259 "method": 
"bdev_lvol_get_lvstores", 00:35:08.259 "req_id": 1 00:35:08.259 } 00:35:08.259 Got JSON-RPC error response 00:35:08.259 response: 00:35:08.259 { 00:35:08.259 "code": -19, 00:35:08.259 "message": "No such device" 00:35:08.259 } 00:35:08.259 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:35:08.259 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:08.259 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:08.259 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:08.259 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:08.519 aio_bdev 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b080cdb0-8f8a-412e-93c3-f9ee52b7410d 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b080cdb0-8f8a-412e-93c3-f9ee52b7410d 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:08.519 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:08.778 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b080cdb0-8f8a-412e-93c3-f9ee52b7410d -t 2000 00:35:08.778 [ 00:35:08.778 { 00:35:08.778 "name": "b080cdb0-8f8a-412e-93c3-f9ee52b7410d", 00:35:08.778 "aliases": [ 00:35:08.778 "lvs/lvol" 00:35:08.778 ], 00:35:08.778 "product_name": "Logical Volume", 00:35:08.778 "block_size": 4096, 00:35:08.778 "num_blocks": 38912, 00:35:08.778 "uuid": "b080cdb0-8f8a-412e-93c3-f9ee52b7410d", 00:35:08.778 "assigned_rate_limits": { 00:35:08.778 "rw_ios_per_sec": 0, 00:35:08.778 "rw_mbytes_per_sec": 0, 00:35:08.778 "r_mbytes_per_sec": 0, 00:35:08.778 "w_mbytes_per_sec": 0 00:35:08.778 }, 00:35:08.778 "claimed": false, 00:35:08.778 "zoned": false, 00:35:08.778 "supported_io_types": { 00:35:08.778 "read": true, 00:35:08.778 "write": true, 00:35:08.778 "unmap": true, 00:35:08.778 "flush": false, 00:35:08.778 "reset": true, 00:35:08.778 "nvme_admin": false, 00:35:08.778 "nvme_io": false, 00:35:08.778 "nvme_io_md": false, 00:35:08.778 "write_zeroes": true, 00:35:08.778 "zcopy": false, 00:35:08.778 "get_zone_info": false, 00:35:08.778 "zone_management": false, 00:35:08.778 "zone_append": false, 00:35:08.778 "compare": false, 00:35:08.778 "compare_and_write": false, 00:35:08.778 "abort": false, 00:35:08.778 "seek_hole": true, 00:35:08.778 "seek_data": true, 00:35:08.778 "copy": false, 00:35:08.778 "nvme_iov_md": false 00:35:08.778 }, 00:35:08.778 "driver_specific": { 00:35:08.778 "lvol": { 00:35:08.778 "lvol_store_uuid": "c5a9e7ca-8fde-40cf-8887-e20299f9bf81", 00:35:08.778 "base_bdev": "aio_bdev", 00:35:08.778 
"thin_provision": false, 00:35:08.778 "num_allocated_clusters": 38, 00:35:08.778 "snapshot": false, 00:35:08.778 "clone": false, 00:35:08.778 "esnap_clone": false 00:35:08.778 } 00:35:08.778 } 00:35:08.778 } 00:35:08.778 ] 00:35:09.038 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:35:09.038 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:35:09.038 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:09.038 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:09.038 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 00:35:09.038 14:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:09.298 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:09.298 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b080cdb0-8f8a-412e-93c3-f9ee52b7410d 00:35:09.557 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5a9e7ca-8fde-40cf-8887-e20299f9bf81 
00:35:09.817 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:10.077 00:35:10.077 real 0m15.899s 00:35:10.077 user 0m15.415s 00:35:10.077 sys 0m1.501s 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.077 ************************************ 00:35:10.077 END TEST lvs_grow_clean 00:35:10.077 ************************************ 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:10.077 ************************************ 00:35:10.077 START TEST lvs_grow_dirty 00:35:10.077 ************************************ 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:10.077 14:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:10.077 14:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:10.337 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:10.337 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:10.337 14:54:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:10.337 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:10.337 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:10.595 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:10.595 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:10.596 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 lvol 150 00:35:10.855 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:10.855 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:10.855 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:11.114 [2024-11-20 14:54:22.869298] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:11.114 [2024-11-20 
14:54:22.869435] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:11.114 true 00:35:11.114 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:11.114 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:11.374 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:11.374 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:11.374 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:11.634 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.893 [2024-11-20 14:54:23.673689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.893 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1792977 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1792977 /var/tmp/bdevperf.sock 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1792977 ']' 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:12.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.152 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:12.152 [2024-11-20 14:54:23.932738] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:12.152 [2024-11-20 14:54:23.932788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792977 ] 00:35:12.152 [2024-11-20 14:54:23.991045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.152 [2024-11-20 14:54:24.034768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.412 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.412 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:12.412 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:12.670 Nvme0n1 00:35:12.670 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:12.929 [ 00:35:12.929 { 00:35:12.929 "name": "Nvme0n1", 00:35:12.929 "aliases": [ 00:35:12.929 "0b600b57-dfa8-44c0-bcb9-048dab220fe5" 00:35:12.929 ], 00:35:12.929 "product_name": "NVMe disk", 00:35:12.929 "block_size": 4096, 00:35:12.929 "num_blocks": 38912, 00:35:12.929 "uuid": "0b600b57-dfa8-44c0-bcb9-048dab220fe5", 00:35:12.929 "numa_id": 1, 00:35:12.929 "assigned_rate_limits": { 00:35:12.929 "rw_ios_per_sec": 0, 00:35:12.929 "rw_mbytes_per_sec": 0, 00:35:12.929 "r_mbytes_per_sec": 0, 00:35:12.929 "w_mbytes_per_sec": 0 00:35:12.929 }, 00:35:12.929 "claimed": false, 00:35:12.929 "zoned": false, 
00:35:12.929 "supported_io_types": { 00:35:12.929 "read": true, 00:35:12.929 "write": true, 00:35:12.929 "unmap": true, 00:35:12.929 "flush": true, 00:35:12.929 "reset": true, 00:35:12.929 "nvme_admin": true, 00:35:12.929 "nvme_io": true, 00:35:12.929 "nvme_io_md": false, 00:35:12.929 "write_zeroes": true, 00:35:12.929 "zcopy": false, 00:35:12.929 "get_zone_info": false, 00:35:12.929 "zone_management": false, 00:35:12.929 "zone_append": false, 00:35:12.929 "compare": true, 00:35:12.929 "compare_and_write": true, 00:35:12.929 "abort": true, 00:35:12.929 "seek_hole": false, 00:35:12.929 "seek_data": false, 00:35:12.929 "copy": true, 00:35:12.929 "nvme_iov_md": false 00:35:12.929 }, 00:35:12.929 "memory_domains": [ 00:35:12.929 { 00:35:12.929 "dma_device_id": "system", 00:35:12.929 "dma_device_type": 1 00:35:12.929 } 00:35:12.929 ], 00:35:12.929 "driver_specific": { 00:35:12.929 "nvme": [ 00:35:12.929 { 00:35:12.929 "trid": { 00:35:12.929 "trtype": "TCP", 00:35:12.929 "adrfam": "IPv4", 00:35:12.929 "traddr": "10.0.0.2", 00:35:12.929 "trsvcid": "4420", 00:35:12.929 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:12.929 }, 00:35:12.929 "ctrlr_data": { 00:35:12.929 "cntlid": 1, 00:35:12.929 "vendor_id": "0x8086", 00:35:12.929 "model_number": "SPDK bdev Controller", 00:35:12.929 "serial_number": "SPDK0", 00:35:12.929 "firmware_revision": "25.01", 00:35:12.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:12.929 "oacs": { 00:35:12.929 "security": 0, 00:35:12.929 "format": 0, 00:35:12.929 "firmware": 0, 00:35:12.929 "ns_manage": 0 00:35:12.930 }, 00:35:12.930 "multi_ctrlr": true, 00:35:12.930 "ana_reporting": false 00:35:12.930 }, 00:35:12.930 "vs": { 00:35:12.930 "nvme_version": "1.3" 00:35:12.930 }, 00:35:12.930 "ns_data": { 00:35:12.930 "id": 1, 00:35:12.930 "can_share": true 00:35:12.930 } 00:35:12.930 } 00:35:12.930 ], 00:35:12.930 "mp_policy": "active_passive" 00:35:12.930 } 00:35:12.930 } 00:35:12.930 ] 00:35:12.930 14:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:12.930 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1793010 00:35:12.930 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:12.930 Running I/O for 10 seconds... 00:35:14.307 Latency(us) 00:35:14.307 [2024-11-20T13:54:26.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.307 Nvme0n1 : 1.00 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:35:14.307 [2024-11-20T13:54:26.265Z] =================================================================================================================== 00:35:14.307 [2024-11-20T13:54:26.265Z] Total : 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:35:14.307 00:35:14.875 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:15.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.135 Nvme0n1 : 2.00 22440.00 87.66 0.00 0.00 0.00 0.00 0.00 00:35:15.135 [2024-11-20T13:54:27.093Z] =================================================================================================================== 00:35:15.135 [2024-11-20T13:54:27.093Z] Total : 22440.00 87.66 0.00 0.00 0.00 0.00 0.00 00:35:15.135 00:35:15.135 true 00:35:15.135 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:15.135 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:15.394 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:15.394 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:15.394 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1793010 00:35:15.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.961 Nvme0n1 : 3.00 22501.00 87.89 0.00 0.00 0.00 0.00 0.00 00:35:15.961 [2024-11-20T13:54:27.919Z] =================================================================================================================== 00:35:15.961 [2024-11-20T13:54:27.919Z] Total : 22501.00 87.89 0.00 0.00 0.00 0.00 0.00 00:35:15.961 00:35:16.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:16.988 Nvme0n1 : 4.00 22590.75 88.25 0.00 0.00 0.00 0.00 0.00 00:35:16.988 [2024-11-20T13:54:28.947Z] =================================================================================================================== 00:35:16.989 [2024-11-20T13:54:28.947Z] Total : 22590.75 88.25 0.00 0.00 0.00 0.00 0.00 00:35:16.989 00:35:17.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.938 Nvme0n1 : 5.00 22644.60 88.46 0.00 0.00 0.00 0.00 0.00 00:35:17.938 [2024-11-20T13:54:29.896Z] =================================================================================================================== 00:35:17.938 [2024-11-20T13:54:29.896Z] Total : 22644.60 88.46 0.00 0.00 0.00 0.00 0.00 00:35:17.938 00:35:19.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:35:19.318 Nvme0n1 : 6.00 22659.33 88.51 0.00 0.00 0.00 0.00 0.00 00:35:19.318 [2024-11-20T13:54:31.276Z] =================================================================================================================== 00:35:19.318 [2024-11-20T13:54:31.276Z] Total : 22659.33 88.51 0.00 0.00 0.00 0.00 0.00 00:35:19.318 00:35:20.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:20.254 Nvme0n1 : 7.00 22706.14 88.70 0.00 0.00 0.00 0.00 0.00 00:35:20.254 [2024-11-20T13:54:32.212Z] =================================================================================================================== 00:35:20.254 [2024-11-20T13:54:32.212Z] Total : 22706.14 88.70 0.00 0.00 0.00 0.00 0.00 00:35:20.254 00:35:21.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.192 Nvme0n1 : 8.00 22741.25 88.83 0.00 0.00 0.00 0.00 0.00 00:35:21.192 [2024-11-20T13:54:33.150Z] =================================================================================================================== 00:35:21.192 [2024-11-20T13:54:33.150Z] Total : 22741.25 88.83 0.00 0.00 0.00 0.00 0.00 00:35:21.192 00:35:22.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:22.128 Nvme0n1 : 9.00 22768.56 88.94 0.00 0.00 0.00 0.00 0.00 00:35:22.128 [2024-11-20T13:54:34.086Z] =================================================================================================================== 00:35:22.128 [2024-11-20T13:54:34.086Z] Total : 22768.56 88.94 0.00 0.00 0.00 0.00 0.00 00:35:22.128 00:35:23.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.065 Nvme0n1 : 10.00 22790.40 89.03 0.00 0.00 0.00 0.00 0.00 00:35:23.065 [2024-11-20T13:54:35.023Z] =================================================================================================================== 00:35:23.065 [2024-11-20T13:54:35.023Z] Total : 22790.40 89.03 0.00 0.00 0.00 0.00 0.00 00:35:23.065 00:35:23.065 
00:35:23.065 Latency(us) 00:35:23.065 [2024-11-20T13:54:35.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:23.065 Nvme0n1 : 10.00 22794.45 89.04 0.00 0.00 5612.47 3205.57 24960.67 00:35:23.065 [2024-11-20T13:54:35.023Z] =================================================================================================================== 00:35:23.065 [2024-11-20T13:54:35.023Z] Total : 22794.45 89.04 0.00 0.00 5612.47 3205.57 24960.67 00:35:23.065 { 00:35:23.065 "results": [ 00:35:23.065 { 00:35:23.065 "job": "Nvme0n1", 00:35:23.065 "core_mask": "0x2", 00:35:23.065 "workload": "randwrite", 00:35:23.065 "status": "finished", 00:35:23.065 "queue_depth": 128, 00:35:23.065 "io_size": 4096, 00:35:23.065 "runtime": 10.003838, 00:35:23.065 "iops": 22794.451489518324, 00:35:23.065 "mibps": 89.04082613093095, 00:35:23.065 "io_failed": 0, 00:35:23.065 "io_timeout": 0, 00:35:23.065 "avg_latency_us": 5612.4734749661375, 00:35:23.065 "min_latency_us": 3205.5652173913045, 00:35:23.065 "max_latency_us": 24960.667826086956 00:35:23.065 } 00:35:23.065 ], 00:35:23.065 "core_count": 1 00:35:23.065 } 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1792977 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1792977 ']' 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1792977 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.065 14:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1792977 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1792977' 00:35:23.065 killing process with pid 1792977 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1792977 00:35:23.065 Received shutdown signal, test time was about 10.000000 seconds 00:35:23.065 00:35:23.065 Latency(us) 00:35:23.065 [2024-11-20T13:54:35.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.065 [2024-11-20T13:54:35.023Z] =================================================================================================================== 00:35:23.065 [2024-11-20T13:54:35.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.065 14:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1792977 00:35:23.324 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:23.583 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:23.843 14:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1789583 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1789583 00:35:23.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1789583 Killed "${NVMF_APP[@]}" "$@" 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1794809 00:35:23.843 14:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1794809 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1794809 ']' 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.843 14:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:24.102 [2024-11-20 14:54:35.834352] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:24.102 [2024-11-20 14:54:35.835262] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:24.102 [2024-11-20 14:54:35.835296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.102 [2024-11-20 14:54:35.911562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.102 [2024-11-20 14:54:35.950376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.102 [2024-11-20 14:54:35.950408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.102 [2024-11-20 14:54:35.950416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.102 [2024-11-20 14:54:35.950421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.102 [2024-11-20 14:54:35.950426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.102 [2024-11-20 14:54:35.950964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.102 [2024-11-20 14:54:36.019706] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:24.102 [2024-11-20 14:54:36.019952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:24.102 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.102 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:24.102 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:24.102 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:24.102 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:24.362 [2024-11-20 14:54:36.272330] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:24.362 [2024-11-20 14:54:36.272530] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:24.362 [2024-11-20 14:54:36.272614] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:24.362 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:24.621 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0b600b57-dfa8-44c0-bcb9-048dab220fe5 -t 2000 00:35:24.881 [ 00:35:24.881 { 00:35:24.881 "name": "0b600b57-dfa8-44c0-bcb9-048dab220fe5", 00:35:24.881 "aliases": [ 00:35:24.881 "lvs/lvol" 00:35:24.881 ], 00:35:24.881 "product_name": "Logical Volume", 00:35:24.881 "block_size": 4096, 00:35:24.881 "num_blocks": 38912, 00:35:24.881 "uuid": "0b600b57-dfa8-44c0-bcb9-048dab220fe5", 00:35:24.881 "assigned_rate_limits": { 00:35:24.881 "rw_ios_per_sec": 0, 00:35:24.881 "rw_mbytes_per_sec": 0, 00:35:24.881 "r_mbytes_per_sec": 0, 00:35:24.881 "w_mbytes_per_sec": 0 00:35:24.881 }, 00:35:24.881 "claimed": false, 00:35:24.881 "zoned": false, 00:35:24.881 "supported_io_types": { 00:35:24.881 "read": true, 00:35:24.881 "write": true, 00:35:24.881 "unmap": true, 00:35:24.881 "flush": false, 00:35:24.881 "reset": true, 00:35:24.881 "nvme_admin": false, 00:35:24.881 "nvme_io": false, 00:35:24.881 "nvme_io_md": false, 00:35:24.881 "write_zeroes": true, 
00:35:24.881 "zcopy": false, 00:35:24.881 "get_zone_info": false, 00:35:24.881 "zone_management": false, 00:35:24.881 "zone_append": false, 00:35:24.881 "compare": false, 00:35:24.881 "compare_and_write": false, 00:35:24.881 "abort": false, 00:35:24.881 "seek_hole": true, 00:35:24.881 "seek_data": true, 00:35:24.881 "copy": false, 00:35:24.881 "nvme_iov_md": false 00:35:24.881 }, 00:35:24.881 "driver_specific": { 00:35:24.881 "lvol": { 00:35:24.881 "lvol_store_uuid": "03a269d7-8e4c-40fd-b86d-ed9c42967945", 00:35:24.881 "base_bdev": "aio_bdev", 00:35:24.881 "thin_provision": false, 00:35:24.881 "num_allocated_clusters": 38, 00:35:24.881 "snapshot": false, 00:35:24.881 "clone": false, 00:35:24.881 "esnap_clone": false 00:35:24.881 } 00:35:24.881 } 00:35:24.881 } 00:35:24.881 ] 00:35:24.881 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:24.881 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:24.881 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:25.141 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:25.141 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:25.141 14:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:25.141 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:25.141 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:25.401 [2024-11-20 14:54:37.259439] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:25.401 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:25.660 request: 00:35:25.660 { 00:35:25.660 "uuid": "03a269d7-8e4c-40fd-b86d-ed9c42967945", 00:35:25.660 "method": "bdev_lvol_get_lvstores", 00:35:25.660 "req_id": 1 00:35:25.660 } 00:35:25.660 Got JSON-RPC error response 00:35:25.660 response: 00:35:25.660 { 00:35:25.660 "code": -19, 00:35:25.660 "message": "No such device" 00:35:25.660 } 00:35:25.660 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:25.660 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:25.660 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:25.660 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:25.660 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:25.919 aio_bdev 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:25.919 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:26.178 14:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0b600b57-dfa8-44c0-bcb9-048dab220fe5 -t 2000 00:35:26.178 [ 00:35:26.178 { 00:35:26.178 "name": "0b600b57-dfa8-44c0-bcb9-048dab220fe5", 00:35:26.178 "aliases": [ 00:35:26.178 "lvs/lvol" 00:35:26.178 ], 00:35:26.178 "product_name": "Logical Volume", 00:35:26.178 "block_size": 4096, 00:35:26.178 "num_blocks": 38912, 00:35:26.178 "uuid": "0b600b57-dfa8-44c0-bcb9-048dab220fe5", 00:35:26.178 "assigned_rate_limits": { 00:35:26.178 "rw_ios_per_sec": 0, 00:35:26.178 "rw_mbytes_per_sec": 0, 00:35:26.178 
"r_mbytes_per_sec": 0, 00:35:26.178 "w_mbytes_per_sec": 0 00:35:26.178 }, 00:35:26.178 "claimed": false, 00:35:26.178 "zoned": false, 00:35:26.178 "supported_io_types": { 00:35:26.178 "read": true, 00:35:26.178 "write": true, 00:35:26.178 "unmap": true, 00:35:26.178 "flush": false, 00:35:26.178 "reset": true, 00:35:26.178 "nvme_admin": false, 00:35:26.178 "nvme_io": false, 00:35:26.178 "nvme_io_md": false, 00:35:26.178 "write_zeroes": true, 00:35:26.178 "zcopy": false, 00:35:26.178 "get_zone_info": false, 00:35:26.178 "zone_management": false, 00:35:26.179 "zone_append": false, 00:35:26.179 "compare": false, 00:35:26.179 "compare_and_write": false, 00:35:26.179 "abort": false, 00:35:26.179 "seek_hole": true, 00:35:26.179 "seek_data": true, 00:35:26.179 "copy": false, 00:35:26.179 "nvme_iov_md": false 00:35:26.179 }, 00:35:26.179 "driver_specific": { 00:35:26.179 "lvol": { 00:35:26.179 "lvol_store_uuid": "03a269d7-8e4c-40fd-b86d-ed9c42967945", 00:35:26.179 "base_bdev": "aio_bdev", 00:35:26.179 "thin_provision": false, 00:35:26.179 "num_allocated_clusters": 38, 00:35:26.179 "snapshot": false, 00:35:26.179 "clone": false, 00:35:26.179 "esnap_clone": false 00:35:26.179 } 00:35:26.179 } 00:35:26.179 } 00:35:26.179 ] 00:35:26.179 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:26.179 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:26.179 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:26.437 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:26.437 14:54:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:26.437 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:26.696 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:26.696 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0b600b57-dfa8-44c0-bcb9-048dab220fe5 00:35:26.956 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03a269d7-8e4c-40fd-b86d-ed9c42967945 00:35:27.215 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:27.215 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:27.215 00:35:27.215 real 0m17.296s 00:35:27.215 user 0m34.829s 00:35:27.215 sys 0m3.735s 00:35:27.215 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.215 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:27.215 ************************************ 00:35:27.215 END TEST lvs_grow_dirty 00:35:27.215 ************************************ 
00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:27.481 nvmf_trace.0 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:27.481 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.482 14:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.482 rmmod nvme_tcp 00:35:27.482 rmmod nvme_fabrics 00:35:27.482 rmmod nvme_keyring 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1794809 ']' 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1794809 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1794809 ']' 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1794809 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1794809 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:27.482 
14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1794809' 00:35:27.482 killing process with pid 1794809 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1794809 00:35:27.482 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1794809 00:35:27.740 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:27.741 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.785 
14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.785 00:35:29.785 real 0m42.374s 00:35:29.785 user 0m52.726s 00:35:29.785 sys 0m10.149s 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:29.785 ************************************ 00:35:29.785 END TEST nvmf_lvs_grow 00:35:29.785 ************************************ 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:29.785 ************************************ 00:35:29.785 START TEST nvmf_bdev_io_wait 00:35:29.785 ************************************ 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:29.785 * Looking for test storage... 
00:35:29.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:35:29.785 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.046 --rc genhtml_branch_coverage=1 00:35:30.046 --rc genhtml_function_coverage=1 00:35:30.046 --rc genhtml_legend=1 00:35:30.046 --rc geninfo_all_blocks=1 00:35:30.046 --rc geninfo_unexecuted_blocks=1 00:35:30.046 00:35:30.046 ' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.046 --rc genhtml_branch_coverage=1 00:35:30.046 --rc genhtml_function_coverage=1 00:35:30.046 --rc genhtml_legend=1 00:35:30.046 --rc geninfo_all_blocks=1 00:35:30.046 --rc geninfo_unexecuted_blocks=1 00:35:30.046 00:35:30.046 ' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.046 --rc genhtml_branch_coverage=1 00:35:30.046 --rc genhtml_function_coverage=1 00:35:30.046 --rc genhtml_legend=1 00:35:30.046 --rc geninfo_all_blocks=1 00:35:30.046 --rc geninfo_unexecuted_blocks=1 00:35:30.046 00:35:30.046 ' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:30.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:30.046 --rc genhtml_branch_coverage=1 00:35:30.046 --rc genhtml_function_coverage=1 
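[Editor's note] The long `cmp_versions` trace above (scripts/common.sh@333-368) is bash deciding whether the installed lcov is older than 2: both version strings are split on `.`, `-` and `:` via IFS, then compared component by component. The logic condenses to the sketch below — `version_lt` is an illustrative name, and only numeric components are handled:

```shell
#!/usr/bin/env bash
# Return 0 if version $1 is strictly less than version $2,
# comparing numeric components split on '.', '-' and ':'.
version_lt() {
  local IFS=.-:                  # split points, as in scripts/common.sh@336-337
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b
  local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < max; v++)); do
    a=${ver1[v]:-0}              # missing components compare as 0
    b=${ver2[v]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                       # equal versions are not "less than"
}

# The trace above is effectively: version_lt 1.15 2  (true, so old-lcov
# fallback options are selected)
```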
00:35:30.046 --rc genhtml_legend=1 00:35:30.046 --rc geninfo_all_blocks=1 00:35:30.046 --rc geninfo_unexecuted_blocks=1 00:35:30.046 00:35:30.046 ' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:30.046 14:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.046 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.047 14:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:30.047 14:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:30.047 14:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:30.047 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:36.622 14:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:36.622 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:36.622 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:36.622 Found net devices under 0000:86:00.0: cvl_0_0 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:36.622 Found net devices under 0000:86:00.1: cvl_0_1 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:36.622 14:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:36.622 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:36.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:36.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:35:36.623 00:35:36.623 --- 10.0.0.2 ping statistics --- 00:35:36.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.623 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:36.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:35:36.623 00:35:36.623 --- 10.0.0.1 ping statistics --- 00:35:36.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.623 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:36.623 14:54:47 
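[Editor's note] The `ip netns` sequence above isolates the target-side NIC (cvl_0_0) in its own namespace so the SPDK target (10.0.0.2) and the initiator (10.0.0.1) exchange real TCP traffic on a single host, verified by the two pings. A condensed dry-run sketch of those steps; the function name and the `RUN` knob are illustrative (the default `RUN=echo` only prints the commands — set `RUN=` and run as root to apply them):

```shell
#!/usr/bin/env bash
RUN=${RUN-echo}   # dry-run by default; clear RUN to execute for real (needs root)

# Move the target NIC into its own namespace and address both ends,
# mirroring nvmf/common.sh@271-284 in the trace above.
setup_nvmf_netns() {
  local ns=$1 tgt_if=$2 ini_if=$3
  $RUN ip netns add "$ns"
  $RUN ip link set "$tgt_if" netns "$ns"
  $RUN ip addr add 10.0.0.1/24 dev "$ini_if"                     # initiator side
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if" # target side
  $RUN ip link set "$ini_if" up
  $RUN ip netns exec "$ns" ip link set "$tgt_if" up
  $RUN ip netns exec "$ns" ip link set lo up
}

# Matching the log: setup_nvmf_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, `ping -c 1 10.0.0.2` from the default namespace and `ip netns exec … ping -c 1 10.0.0.1` from inside it confirm the path, exactly as the trace shows.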
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1798824 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1798824 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1798824 ']' 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.623 [2024-11-20 14:54:47.807702] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:36.623 [2024-11-20 14:54:47.808640] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:35:36.623 [2024-11-20 14:54:47.808672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.623 [2024-11-20 14:54:47.888229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:36.623 [2024-11-20 14:54:47.931793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.623 [2024-11-20 14:54:47.931833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.623 [2024-11-20 14:54:47.931840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.623 [2024-11-20 14:54:47.931846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.623 [2024-11-20 14:54:47.931852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:36.623 [2024-11-20 14:54:47.933453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.623 [2024-11-20 14:54:47.933574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.623 [2024-11-20 14:54:47.933683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.623 [2024-11-20 14:54:47.933684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.623 [2024-11-20 14:54:47.934051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.623 14:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.623 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.623 14:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:36.623 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.623 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.623 [2024-11-20 14:54:48.059238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:36.623 [2024-11-20 14:54:48.059955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:36.623 [2024-11-20 14:54:48.060013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:36.624 [2024-11-20 14:54:48.060151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.624 [2024-11-20 14:54:48.070273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.624 Malloc0 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.624 14:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:36.624 [2024-11-20 14:54:48.138507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1798861 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1798864 00:35:36.624 14:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.624 { 00:35:36.624 "params": { 00:35:36.624 "name": "Nvme$subsystem", 00:35:36.624 "trtype": "$TEST_TRANSPORT", 00:35:36.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.624 "adrfam": "ipv4", 00:35:36.624 "trsvcid": "$NVMF_PORT", 00:35:36.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.624 "hdgst": ${hdgst:-false}, 00:35:36.624 "ddgst": ${ddgst:-false} 00:35:36.624 }, 00:35:36.624 "method": "bdev_nvme_attach_controller" 00:35:36.624 } 00:35:36.624 EOF 00:35:36.624 )") 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1798866 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.624 14:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.624 { 00:35:36.624 "params": { 00:35:36.624 "name": "Nvme$subsystem", 00:35:36.624 "trtype": "$TEST_TRANSPORT", 00:35:36.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.624 "adrfam": "ipv4", 00:35:36.624 "trsvcid": "$NVMF_PORT", 00:35:36.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.624 "hdgst": ${hdgst:-false}, 00:35:36.624 "ddgst": ${ddgst:-false} 00:35:36.624 }, 00:35:36.624 "method": "bdev_nvme_attach_controller" 00:35:36.624 } 00:35:36.624 EOF 00:35:36.624 )") 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1798869 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.624 { 00:35:36.624 "params": { 00:35:36.624 "name": 
"Nvme$subsystem", 00:35:36.624 "trtype": "$TEST_TRANSPORT", 00:35:36.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.624 "adrfam": "ipv4", 00:35:36.624 "trsvcid": "$NVMF_PORT", 00:35:36.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.624 "hdgst": ${hdgst:-false}, 00:35:36.624 "ddgst": ${ddgst:-false} 00:35:36.624 }, 00:35:36.624 "method": "bdev_nvme_attach_controller" 00:35:36.624 } 00:35:36.624 EOF 00:35:36.624 )") 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:36.624 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.625 { 00:35:36.625 "params": { 00:35:36.625 "name": "Nvme$subsystem", 00:35:36.625 "trtype": "$TEST_TRANSPORT", 00:35:36.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.625 "adrfam": "ipv4", 00:35:36.625 "trsvcid": "$NVMF_PORT", 00:35:36.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.625 "hdgst": ${hdgst:-false}, 00:35:36.625 "ddgst": ${ddgst:-false} 00:35:36.625 }, 00:35:36.625 "method": 
"bdev_nvme_attach_controller" 00:35:36.625 } 00:35:36.625 EOF 00:35:36.625 )") 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1798861 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.625 "params": { 00:35:36.625 "name": "Nvme1", 00:35:36.625 "trtype": "tcp", 00:35:36.625 "traddr": "10.0.0.2", 00:35:36.625 "adrfam": "ipv4", 00:35:36.625 "trsvcid": "4420", 00:35:36.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.625 "hdgst": false, 00:35:36.625 "ddgst": false 00:35:36.625 }, 00:35:36.625 "method": "bdev_nvme_attach_controller" 00:35:36.625 }' 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.625 "params": { 00:35:36.625 "name": "Nvme1", 00:35:36.625 "trtype": "tcp", 00:35:36.625 "traddr": "10.0.0.2", 00:35:36.625 "adrfam": "ipv4", 00:35:36.625 "trsvcid": "4420", 00:35:36.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.625 "hdgst": false, 00:35:36.625 "ddgst": false 00:35:36.625 }, 00:35:36.625 "method": "bdev_nvme_attach_controller" 00:35:36.625 }' 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.625 "params": { 00:35:36.625 "name": "Nvme1", 00:35:36.625 "trtype": "tcp", 00:35:36.625 "traddr": "10.0.0.2", 00:35:36.625 "adrfam": "ipv4", 00:35:36.625 "trsvcid": "4420", 00:35:36.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.625 "hdgst": false, 00:35:36.625 "ddgst": false 00:35:36.625 }, 00:35:36.625 "method": "bdev_nvme_attach_controller" 00:35:36.625 }' 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:36.625 14:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.625 "params": { 00:35:36.625 "name": "Nvme1", 00:35:36.625 "trtype": "tcp", 00:35:36.625 "traddr": "10.0.0.2", 00:35:36.625 "adrfam": "ipv4", 00:35:36.625 "trsvcid": "4420", 00:35:36.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.625 "hdgst": false, 00:35:36.625 "ddgst": false 00:35:36.625 }, 00:35:36.625 "method": "bdev_nvme_attach_controller" 
00:35:36.625 }' 00:35:36.625 [2024-11-20 14:54:48.190311] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:35:36.625 [2024-11-20 14:54:48.190352] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:36.625 [2024-11-20 14:54:48.191352] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:35:36.625 [2024-11-20 14:54:48.191404] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:36.625 [2024-11-20 14:54:48.192733] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:35:36.625 [2024-11-20 14:54:48.192775] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:36.625 [2024-11-20 14:54:48.195803] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:36.625 [2024-11-20 14:54:48.195843] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:36.625 [2024-11-20 14:54:48.389077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.625 [2024-11-20 14:54:48.432118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:36.625 [2024-11-20 14:54:48.485858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.625 [2024-11-20 14:54:48.537388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.625 [2024-11-20 14:54:48.539316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:36.884 [2024-11-20 14:54:48.581329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:36.884 [2024-11-20 14:54:48.597400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.884 [2024-11-20 14:54:48.640358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:36.884 Running I/O for 1 seconds... 00:35:36.884 Running I/O for 1 seconds... 00:35:36.884 Running I/O for 1 seconds... 00:35:36.884 Running I/O for 1 seconds... 
00:35:37.816 7357.00 IOPS, 28.74 MiB/s [2024-11-20T13:54:49.774Z] 11869.00 IOPS, 46.36 MiB/s 00:35:37.816 Latency(us) 00:35:37.816 [2024-11-20T13:54:49.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.816 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:35:37.816 Nvme1n1 : 1.01 11928.36 46.60 0.00 0.00 10694.30 5185.89 16526.47 00:35:37.816 [2024-11-20T13:54:49.774Z] =================================================================================================================== 00:35:37.816 [2024-11-20T13:54:49.774Z] Total : 11928.36 46.60 0.00 0.00 10694.30 5185.89 16526.47 00:35:37.816 00:35:37.816 Latency(us) 00:35:37.816 [2024-11-20T13:54:49.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.816 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:35:37.816 Nvme1n1 : 1.02 7374.44 28.81 0.00 0.00 17233.30 3647.22 22567.18 00:35:37.816 [2024-11-20T13:54:49.774Z] =================================================================================================================== 00:35:37.816 [2024-11-20T13:54:49.774Z] Total : 7374.44 28.81 0.00 0.00 17233.30 3647.22 22567.18 00:35:37.816 7473.00 IOPS, 29.19 MiB/s 00:35:37.816 Latency(us) 00:35:37.816 [2024-11-20T13:54:49.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.816 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:35:37.816 Nvme1n1 : 1.01 7611.56 29.73 0.00 0.00 16784.43 2706.92 31457.28 00:35:37.816 [2024-11-20T13:54:49.775Z] =================================================================================================================== 00:35:37.817 [2024-11-20T13:54:49.775Z] Total : 7611.56 29.73 0.00 0.00 16784.43 2706.92 31457.28 00:35:38.075 237000.00 IOPS, 925.78 MiB/s 00:35:38.075 Latency(us) 00:35:38.075 [2024-11-20T13:54:50.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:35:38.075 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:35:38.075 Nvme1n1 : 1.00 236627.98 924.33 0.00 0.00 538.23 227.95 1552.92 00:35:38.075 [2024-11-20T13:54:50.033Z] =================================================================================================================== 00:35:38.075 [2024-11-20T13:54:50.033Z] Total : 236627.98 924.33 0.00 0.00 538.23 227.95 1552.92 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1798864 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1798866 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1798869 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.075 rmmod nvme_tcp 00:35:38.075 rmmod nvme_fabrics 00:35:38.075 rmmod nvme_keyring 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:38.075 14:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1798824 ']' 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1798824 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1798824 ']' 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1798824 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:38.075 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1798824 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:38.335 14:54:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1798824' 00:35:38.335 killing process with pid 1798824 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1798824 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1798824 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.335 14:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:40.872 00:35:40.872 real 0m10.643s 00:35:40.872 user 0m14.578s 00:35:40.872 sys 0m6.339s 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:40.872 ************************************ 00:35:40.872 END TEST nvmf_bdev_io_wait 00:35:40.872 ************************************ 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:40.872 ************************************ 00:35:40.872 START TEST nvmf_queue_depth 00:35:40.872 ************************************ 00:35:40.872 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:40.872 * Looking for test storage... 
00:35:40.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:40.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.873 --rc genhtml_branch_coverage=1 00:35:40.873 --rc genhtml_function_coverage=1 00:35:40.873 --rc genhtml_legend=1 00:35:40.873 --rc geninfo_all_blocks=1 00:35:40.873 --rc geninfo_unexecuted_blocks=1 00:35:40.873 00:35:40.873 ' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:40.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.873 --rc genhtml_branch_coverage=1 00:35:40.873 --rc genhtml_function_coverage=1 00:35:40.873 --rc genhtml_legend=1 00:35:40.873 --rc geninfo_all_blocks=1 00:35:40.873 --rc geninfo_unexecuted_blocks=1 00:35:40.873 00:35:40.873 ' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:40.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.873 --rc genhtml_branch_coverage=1 00:35:40.873 --rc genhtml_function_coverage=1 00:35:40.873 --rc genhtml_legend=1 00:35:40.873 --rc geninfo_all_blocks=1 00:35:40.873 --rc geninfo_unexecuted_blocks=1 00:35:40.873 00:35:40.873 ' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:40.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.873 --rc genhtml_branch_coverage=1 00:35:40.873 --rc genhtml_function_coverage=1 00:35:40.873 --rc genhtml_legend=1 00:35:40.873 --rc 
geninfo_all_blocks=1 00:35:40.873 --rc geninfo_unexecuted_blocks=1 00:35:40.873 00:35:40.873 ' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.873 14:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.873 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.874 14:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:40.874 14:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:40.874 14:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:47.445 
14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:47.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:47.445 14:54:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:47.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:47.445 Found net devices under 0000:86:00.0: cvl_0_0 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.445 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:47.446 Found net devices under 0000:86:00.1: cvl_0_1 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:47.446 14:54:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:47.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:47.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:35:47.446 00:35:47.446 --- 10.0.0.2 ping statistics --- 00:35:47.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.446 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:47.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:35:47.446 00:35:47.446 --- 10.0.0.1 ping statistics --- 00:35:47.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.446 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:47.446 14:54:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1802632 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1802632 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1802632 ']' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 [2024-11-20 14:54:58.470250] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:47.446 [2024-11-20 14:54:58.471212] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:35:47.446 [2024-11-20 14:54:58.471250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.446 [2024-11-20 14:54:58.550625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.446 [2024-11-20 14:54:58.594158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.446 [2024-11-20 14:54:58.594191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:47.446 [2024-11-20 14:54:58.594198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.446 [2024-11-20 14:54:58.594205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.446 [2024-11-20 14:54:58.594210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:47.446 [2024-11-20 14:54:58.594755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:47.446 [2024-11-20 14:54:58.662889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:47.446 [2024-11-20 14:54:58.663119] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 [2024-11-20 14:54:58.731439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 Malloc0 00:35:47.446 14:54:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 [2024-11-20 14:54:58.803562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.446 
14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1802834 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1802834 /var/tmp/bdevperf.sock 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1802834 ']' 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:47.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.446 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 [2024-11-20 14:54:58.852798] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:35:47.446 [2024-11-20 14:54:58.852841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802834 ] 00:35:47.446 [2024-11-20 14:54:58.926023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.446 [2024-11-20 14:54:58.968845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:47.446 NVMe0n1 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.446 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:47.446 Running I/O for 10 seconds... 
00:35:49.320 11356.00 IOPS, 44.36 MiB/s [2024-11-20T13:55:02.656Z] 11762.00 IOPS, 45.95 MiB/s [2024-11-20T13:55:03.592Z] 11925.33 IOPS, 46.58 MiB/s [2024-11-20T13:55:04.529Z] 12014.00 IOPS, 46.93 MiB/s [2024-11-20T13:55:05.467Z] 12006.60 IOPS, 46.90 MiB/s [2024-11-20T13:55:06.404Z] 12049.67 IOPS, 47.07 MiB/s [2024-11-20T13:55:07.341Z] 12044.57 IOPS, 47.05 MiB/s [2024-11-20T13:55:08.721Z] 12047.88 IOPS, 47.06 MiB/s [2024-11-20T13:55:09.289Z] 12091.11 IOPS, 47.23 MiB/s [2024-11-20T13:55:09.549Z] 12108.00 IOPS, 47.30 MiB/s 00:35:57.591 Latency(us) 00:35:57.591 [2024-11-20T13:55:09.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.591 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:57.591 Verification LBA range: start 0x0 length 0x4000 00:35:57.591 NVMe0n1 : 10.05 12142.20 47.43 0.00 0.00 84029.77 13107.20 56303.97 00:35:57.591 [2024-11-20T13:55:09.549Z] =================================================================================================================== 00:35:57.591 [2024-11-20T13:55:09.549Z] Total : 12142.20 47.43 0.00 0.00 84029.77 13107.20 56303.97 00:35:57.591 { 00:35:57.591 "results": [ 00:35:57.591 { 00:35:57.591 "job": "NVMe0n1", 00:35:57.591 "core_mask": "0x1", 00:35:57.591 "workload": "verify", 00:35:57.591 "status": "finished", 00:35:57.591 "verify_range": { 00:35:57.591 "start": 0, 00:35:57.591 "length": 16384 00:35:57.591 }, 00:35:57.591 "queue_depth": 1024, 00:35:57.591 "io_size": 4096, 00:35:57.591 "runtime": 10.051641, 00:35:57.591 "iops": 12142.19648314141, 00:35:57.591 "mibps": 47.43045501227113, 00:35:57.591 "io_failed": 0, 00:35:57.591 "io_timeout": 0, 00:35:57.591 "avg_latency_us": 84029.77197694298, 00:35:57.591 "min_latency_us": 13107.2, 00:35:57.591 "max_latency_us": 56303.97217391304 00:35:57.591 } 00:35:57.591 ], 00:35:57.591 "core_count": 1 00:35:57.591 } 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 1802834 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1802834 ']' 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1802834 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802834 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802834' 00:35:57.591 killing process with pid 1802834 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1802834 00:35:57.591 Received shutdown signal, test time was about 10.000000 seconds 00:35:57.591 00:35:57.591 Latency(us) 00:35:57.591 [2024-11-20T13:55:09.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.591 [2024-11-20T13:55:09.549Z] =================================================================================================================== 00:35:57.591 [2024-11-20T13:55:09.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:57.591 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1802834 00:35:57.850 14:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.850 rmmod nvme_tcp 00:35:57.850 rmmod nvme_fabrics 00:35:57.850 rmmod nvme_keyring 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:57.850 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1802632 ']' 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1802632 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1802632 ']' 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1802632 00:35:57.851 14:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802632 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802632' 00:35:57.851 killing process with pid 1802632 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1802632 00:35:57.851 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1802632 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.109 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.016 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.016 00:36:00.016 real 0m19.621s 00:36:00.016 user 0m22.657s 00:36:00.016 sys 0m6.265s 00:36:00.016 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.016 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:00.016 ************************************ 00:36:00.016 END TEST nvmf_queue_depth 00:36:00.016 ************************************ 00:36:00.276 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:00.276 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:00.276 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.276 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:00.276 ************************************ 00:36:00.276 START 
TEST nvmf_target_multipath 00:36:00.276 ************************************ 00:36:00.276 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:00.276 * Looking for test storage... 00:36:00.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.276 14:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:36:00.276 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:00.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.277 --rc genhtml_branch_coverage=1 00:36:00.277 --rc genhtml_function_coverage=1 00:36:00.277 --rc genhtml_legend=1 00:36:00.277 --rc geninfo_all_blocks=1 00:36:00.277 --rc geninfo_unexecuted_blocks=1 00:36:00.277 00:36:00.277 ' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:00.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.277 --rc genhtml_branch_coverage=1 00:36:00.277 --rc genhtml_function_coverage=1 00:36:00.277 --rc genhtml_legend=1 00:36:00.277 --rc geninfo_all_blocks=1 00:36:00.277 --rc geninfo_unexecuted_blocks=1 00:36:00.277 00:36:00.277 ' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:00.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.277 --rc genhtml_branch_coverage=1 00:36:00.277 --rc genhtml_function_coverage=1 00:36:00.277 --rc genhtml_legend=1 00:36:00.277 --rc geninfo_all_blocks=1 00:36:00.277 --rc geninfo_unexecuted_blocks=1 00:36:00.277 00:36:00.277 ' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:00.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.277 --rc genhtml_branch_coverage=1 00:36:00.277 --rc genhtml_function_coverage=1 00:36:00.277 --rc genhtml_legend=1 00:36:00.277 --rc geninfo_all_blocks=1 00:36:00.277 --rc geninfo_unexecuted_blocks=1 00:36:00.277 00:36:00.277 ' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.277 14:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.277 14:55:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.277 14:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:06.854 14:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:06.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:06.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:06.854 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:06.855 Found net devices under 0000:86:00.0: cvl_0_0 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.855 14:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:06.855 Found net devices under 0000:86:00.1: cvl_0_1 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.855 14:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:06.855 14:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.855 14:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:06.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:36:06.855 00:36:06.855 --- 10.0.0.2 ping statistics --- 00:36:06.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.855 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:06.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:36:06.855 00:36:06.855 --- 10.0.0.1 ping statistics --- 00:36:06.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.855 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:36:06.855 only one NIC for nvmf test 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:36:06.855 14:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:06.855 rmmod nvme_tcp 00:36:06.855 rmmod nvme_fabrics 00:36:06.855 rmmod nvme_keyring 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:06.855 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:06.856 14:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:06.856 14:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.773 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:08.773 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:36:08.773 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.774 
14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:08.774 00:36:08.774 real 0m8.297s 00:36:08.774 user 0m1.796s 00:36:08.774 sys 0m4.527s 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:08.774 ************************************ 00:36:08.774 END TEST nvmf_target_multipath 00:36:08.774 ************************************ 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:08.774 ************************************ 00:36:08.774 START TEST nvmf_zcopy 00:36:08.774 ************************************ 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:08.774 * Looking for test storage... 
00:36:08.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:08.774 14:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:08.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.774 --rc genhtml_branch_coverage=1 00:36:08.774 --rc genhtml_function_coverage=1 00:36:08.774 --rc genhtml_legend=1 00:36:08.774 --rc geninfo_all_blocks=1 00:36:08.774 --rc geninfo_unexecuted_blocks=1 00:36:08.774 00:36:08.774 ' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:08.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.774 --rc genhtml_branch_coverage=1 00:36:08.774 --rc genhtml_function_coverage=1 00:36:08.774 --rc genhtml_legend=1 00:36:08.774 --rc geninfo_all_blocks=1 00:36:08.774 --rc geninfo_unexecuted_blocks=1 00:36:08.774 00:36:08.774 ' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:08.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.774 --rc genhtml_branch_coverage=1 00:36:08.774 --rc genhtml_function_coverage=1 00:36:08.774 --rc genhtml_legend=1 00:36:08.774 --rc geninfo_all_blocks=1 00:36:08.774 --rc geninfo_unexecuted_blocks=1 00:36:08.774 00:36:08.774 ' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:08.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.774 --rc genhtml_branch_coverage=1 00:36:08.774 --rc genhtml_function_coverage=1 00:36:08.774 --rc genhtml_legend=1 00:36:08.774 --rc geninfo_all_blocks=1 00:36:08.774 --rc geninfo_unexecuted_blocks=1 00:36:08.774 00:36:08.774 ' 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:08.774 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.775 14:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:08.775 14:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:36:08.775 14:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:15.350 
14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.350 14:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.350 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:15.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:15.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:15.351 Found net devices under 0000:86:00.0: cvl_0_0 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:15.351 Found net devices under 0000:86:00.1: cvl_0_1 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.351 14:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:15.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:36:15.351 00:36:15.351 --- 10.0.0.2 ping statistics --- 00:36:15.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.351 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:36:15.351 00:36:15.351 --- 10.0.0.1 ping statistics --- 00:36:15.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.351 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1811344 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1811344 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1811344 ']' 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.351 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 [2024-11-20 14:55:26.468272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.352 [2024-11-20 14:55:26.469271] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:36:15.352 [2024-11-20 14:55:26.469311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.352 [2024-11-20 14:55:26.548853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.352 [2024-11-20 14:55:26.590115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.352 [2024-11-20 14:55:26.590152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.352 [2024-11-20 14:55:26.590160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.352 [2024-11-20 14:55:26.590166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.352 [2024-11-20 14:55:26.590172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.352 [2024-11-20 14:55:26.590707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.352 [2024-11-20 14:55:26.659328] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:15.352 [2024-11-20 14:55:26.659542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 [2024-11-20 14:55:26.727437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 
14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 [2024-11-20 14:55:26.755688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 malloc0 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.352 { 00:36:15.352 "params": { 00:36:15.352 "name": "Nvme$subsystem", 00:36:15.352 "trtype": "$TEST_TRANSPORT", 00:36:15.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.352 "adrfam": "ipv4", 00:36:15.352 "trsvcid": "$NVMF_PORT", 00:36:15.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.352 "hdgst": ${hdgst:-false}, 00:36:15.352 "ddgst": ${ddgst:-false} 00:36:15.352 }, 00:36:15.352 "method": "bdev_nvme_attach_controller" 00:36:15.352 } 00:36:15.352 EOF 00:36:15.352 )") 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:15.352 14:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:15.352 14:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:15.352 "params": { 00:36:15.352 "name": "Nvme1", 00:36:15.352 "trtype": "tcp", 00:36:15.352 "traddr": "10.0.0.2", 00:36:15.352 "adrfam": "ipv4", 00:36:15.352 "trsvcid": "4420", 00:36:15.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:15.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:15.352 "hdgst": false, 00:36:15.352 "ddgst": false 00:36:15.352 }, 00:36:15.352 "method": "bdev_nvme_attach_controller" 00:36:15.352 }' 00:36:15.352 [2024-11-20 14:55:26.851595] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:36:15.352 [2024-11-20 14:55:26.851651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811415 ] 00:36:15.352 [2024-11-20 14:55:26.925544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.352 [2024-11-20 14:55:26.967003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.352 Running I/O for 10 seconds... 
00:36:17.226 8327.00 IOPS, 65.05 MiB/s
[2024-11-20T13:55:30.575Z] 8357.00 IOPS, 65.29 MiB/s
[2024-11-20T13:55:31.511Z] 8357.33 IOPS, 65.29 MiB/s
[2024-11-20T13:55:32.446Z] 8371.25 IOPS, 65.40 MiB/s
[2024-11-20T13:55:33.383Z] 8369.80 IOPS, 65.39 MiB/s
[2024-11-20T13:55:34.321Z] 8377.17 IOPS, 65.45 MiB/s
[2024-11-20T13:55:35.259Z] 8383.43 IOPS, 65.50 MiB/s
[2024-11-20T13:55:36.196Z] 8380.00 IOPS, 65.47 MiB/s
[2024-11-20T13:55:37.577Z] 8384.67 IOPS, 65.51 MiB/s
[2024-11-20T13:55:37.577Z] 8383.50 IOPS, 65.50 MiB/s
00:36:25.619 Latency(us)
00:36:25.619 [2024-11-20T13:55:37.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:25.619 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:36:25.619 Verification LBA range: start 0x0 length 0x1000
00:36:25.619 Nvme1n1 : 10.01 8387.36 65.53 0.00 0.00 15218.26 1503.05 21541.40
00:36:25.619 [2024-11-20T13:55:37.577Z] ===================================================================================================================
00:36:25.619 [2024-11-20T13:55:37.577Z] Total : 8387.36 65.53 0.00 0.00 15218.26 1503.05 21541.40
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1812988
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:36:25.619 14:55:37
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:25.619 { 00:36:25.619 "params": { 00:36:25.619 "name": "Nvme$subsystem", 00:36:25.619 "trtype": "$TEST_TRANSPORT", 00:36:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.619 "adrfam": "ipv4", 00:36:25.619 "trsvcid": "$NVMF_PORT", 00:36:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.619 "hdgst": ${hdgst:-false}, 00:36:25.619 "ddgst": ${ddgst:-false} 00:36:25.619 }, 00:36:25.619 "method": "bdev_nvme_attach_controller" 00:36:25.619 } 00:36:25.619 EOF 00:36:25.619 )") 00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:25.619 [2024-11-20 14:55:37.363056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.363089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:25.619 14:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:25.619 "params": { 00:36:25.619 "name": "Nvme1", 00:36:25.619 "trtype": "tcp", 00:36:25.619 "traddr": "10.0.0.2", 00:36:25.619 "adrfam": "ipv4", 00:36:25.619 "trsvcid": "4420", 00:36:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:25.619 "hdgst": false, 00:36:25.619 "ddgst": false 00:36:25.619 }, 00:36:25.619 "method": "bdev_nvme_attach_controller" 00:36:25.619 }' 00:36:25.619 [2024-11-20 14:55:37.375016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.375029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.387013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.387022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.399013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.399022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.403046] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:36:25.619 [2024-11-20 14:55:37.403087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812988 ] 00:36:25.619 [2024-11-20 14:55:37.411014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.411025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.423027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.423037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.435013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.435023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.447011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.447020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.459013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.459025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.471013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.471023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.477815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.619 [2024-11-20 14:55:37.483015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:25.619 [2024-11-20 14:55:37.483027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.495016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.495031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.507015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.507025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.519013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.519027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.519868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.619 [2024-11-20 14:55:37.531021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.531036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.543018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.543036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.555014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.619 [2024-11-20 14:55:37.555027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.619 [2024-11-20 14:55:37.567012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.620 [2024-11-20 14:55:37.567024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.579017] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.579030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.591033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.591048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.603022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.603037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.615026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.615046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.627017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.627032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.639016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.639030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.651012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.651024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.663010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.663021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.675017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.675035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.878 [2024-11-20 14:55:37.687019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.878 [2024-11-20 14:55:37.687034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.699020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.699036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.711020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.711037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 Running I/O for 5 seconds... 00:36:25.879 [2024-11-20 14:55:37.724542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.724561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.739632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.739650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.755203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.755223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.768424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.768443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.779041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.779060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.792856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.792875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.808333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.808352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:25.879 [2024-11-20 14:55:37.823926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:25.879 [2024-11-20 14:55:37.823944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.839296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.839315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.852933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.852958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.868491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.868510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.883925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.883944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.899195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 
[2024-11-20 14:55:37.899214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.910779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.910798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.924785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.924804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.940460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.940479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.955346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.955364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.967893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.967910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.983342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.983360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:37.994929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:37.994954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:38.009532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:38.009550] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:38.025194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:38.025213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:38.040726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:38.040746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:38.056052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:38.056071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:38.071383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:38.071405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.138 [2024-11-20 14:55:38.087598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.138 [2024-11-20 14:55:38.087616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.103631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.103649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.119019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.119038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.131317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.131335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:26.398 [2024-11-20 14:55:38.145298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.145317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.160466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.160485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.175728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.175745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.191367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.191386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.205083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.205101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.220734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.220753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.236209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.236226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.251664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.251682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.266794] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.266812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.279853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.279870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.295464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.295482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.310848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.310866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.324798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.324817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.398 [2024-11-20 14:55:38.339867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.398 [2024-11-20 14:55:38.339896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.354680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.354703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.368633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.368651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.384038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.384056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.399186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.399205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.410489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.410507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.424875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.424894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.440321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.440339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.455262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.455281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.467148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.467167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.481032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.481051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.496580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 
[2024-11-20 14:55:38.496598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.511673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.511692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.527892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.527912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.543049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.543068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.554828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.554847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.569621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.569639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.584890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.584908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.657 [2024-11-20 14:55:38.600248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.657 [2024-11-20 14:55:38.600265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.615098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.615117] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.628538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.628564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.644283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.644302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.659670] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.659688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.675339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.675358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.689017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.689036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.703999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.704016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 16116.00 IOPS, 125.91 MiB/s [2024-11-20T13:55:38.874Z] [2024-11-20 14:55:38.718929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.718958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.730500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.730518] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.744976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.745010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.760223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.760241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.775018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.775036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.786454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.786472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.800934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.800961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.816188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.816206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.831232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.831250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:26.916 [2024-11-20 14:55:38.842653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.842671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:26.916 [2024-11-20 14:55:38.856942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:26.916 [2024-11-20 14:55:38.856966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.872167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.872185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.887154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.887173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.898738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.898756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.912919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.912937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.928228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.928247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.943707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.943725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.958759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.958777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.970211] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.970228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:38.985216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:38.985234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.000309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.000327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.015362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.015379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.030893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.030912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.042060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.042079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.057296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.057315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.072360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.072379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.087212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.087232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.100945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.100971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.174 [2024-11-20 14:55:39.116224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.174 [2024-11-20 14:55:39.116242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.131507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.131525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.146906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.146925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.161542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.161561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.176852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.176871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.192075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.192093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.207417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 
[2024-11-20 14:55:39.207434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.223290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.223310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.236865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.236885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.252455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.252474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.267483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.267501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.283782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.283802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.299490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.299509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.314621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.314640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.328188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.328206] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.343695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.343714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.359804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.359822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.375039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.375058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.432 [2024-11-20 14:55:39.386851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.432 [2024-11-20 14:55:39.386870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.401090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.401108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.416400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.416420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.431477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.431495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.446914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.446933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:27.690 [2024-11-20 14:55:39.461025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.461044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.476613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.476633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.491698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.491717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.506804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.506823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.519285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.519303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.532884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.532902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.547776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.547795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.563270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.563289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.574867] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.574886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.588762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.588781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.603969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.603988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.618439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.618458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.690 [2024-11-20 14:55:39.632588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.690 [2024-11-20 14:55:39.632607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.647796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.647814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.663288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.663306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.674942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.674968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.689055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.689073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.704168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.704186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.719287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.719306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 16207.00 IOPS, 126.62 MiB/s [2024-11-20T13:55:39.909Z] [2024-11-20 14:55:39.732225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.732243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.747192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.747211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.758238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.758257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.773497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.773516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.788453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.788471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.803719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.803737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.818993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.819011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.830520] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.830538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.845275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.845293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.860006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.860024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.875044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.875062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.886479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.886497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:27.951 [2024-11-20 14:55:39.900788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:27.951 [2024-11-20 14:55:39.900806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:39.916153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 
[2024-11-20 14:55:39.916170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:39.931375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:39.931393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:39.946805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:39.946824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:39.960608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:39.960626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:39.975654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:39.975672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:39.988686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:39.988709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.004495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.004517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.019453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.019471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.035728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.035748] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.051065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.051083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.064299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.064336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.080196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.080216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.095676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.095694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.111449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.111467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.126563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.126581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.141633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.141651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.211 [2024-11-20 14:55:40.156221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.211 [2024-11-20 14:55:40.156239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:36:28.470 [2024-11-20 14:55:40.171660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.171678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.186715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.186733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.200058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.200076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.215648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.215666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.230683] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.230702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.244774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.244792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.260217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.260236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.275442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.275465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.290559] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.290578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.304701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.304719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.319994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.320012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.334985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.335003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.345841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.345859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.361372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.361390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.376433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.376452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.391453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.391471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.406400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.406418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.471 [2024-11-20 14:55:40.420474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.471 [2024-11-20 14:55:40.420492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.435417] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.435434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.446716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.446734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.461198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.461216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.476653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.476670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.491866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.491885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.506960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.506979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.518075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 
[2024-11-20 14:55:40.518094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.533503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.533523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.548420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.548444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.563615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.563635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.578722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.578742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.593007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.593030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.608307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.608325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.623739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.623757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.639086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.639112] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.650555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.650573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.665149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.665168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.730 [2024-11-20 14:55:40.680193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.730 [2024-11-20 14:55:40.680212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.989 [2024-11-20 14:55:40.695104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.989 [2024-11-20 14:55:40.695123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.989 [2024-11-20 14:55:40.707730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.989 [2024-11-20 14:55:40.707749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.989 16219.33 IOPS, 126.71 MiB/s [2024-11-20T13:55:40.947Z] [2024-11-20 14:55:40.723620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.989 [2024-11-20 14:55:40.723639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.989 [2024-11-20 14:55:40.739101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.989 [2024-11-20 14:55:40.739121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.989 [2024-11-20 14:55:40.750553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.989 [2024-11-20 14:55:40.750573] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:28.989 [2024-11-20 14:55:40.764826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:28.989 [2024-11-20 14:55:40.764844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical subsystem.c:2126/nvmf_rpc.c:1520 error pair repeated at ~11-16 ms intervals; timestamps 14:55:40.779 through 14:55:41.696 elided ...]
00:36:29.768 [2024-11-20 14:55:41.711726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:36:29.768 [2024-11-20 14:55:41.711743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.028 16249.75 IOPS, 126.95 MiB/s [2024-11-20T13:55:41.986Z]
[... identical error pair repeated; timestamps 14:55:41.727 through 14:55:42.637 elided ...]
00:36:30.807 [2024-11-20 14:55:42.652672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.807 [2024-11-20 14:55:42.652691]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:30.807 [2024-11-20 14:55:42.667561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:30.807 [2024-11-20 14:55:42.667579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated; timestamps 14:55:42.682 through 14:55:42.724 elided ...]
00:36:30.807 16259.60 IOPS, 127.03 MiB/s
00:36:30.808 Latency(us)
00:36:30.808 [2024-11-20T13:55:42.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:30.808 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:30.808 Nvme1n1 : 5.01 16268.02 127.09 0.00 0.00 7861.31 2065.81 13848.04
00:36:30.808 [2024-11-20T13:55:42.766Z] ===================================================================================================================
00:36:30.808 [2024-11-20T13:55:42.766Z] Total : 16268.02 127.09 0.00 0.00 7861.31 2065.81 13848.04
00:36:30.808 [2024-11-20 14:55:42.735019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:36:30.808 [2024-11-20 14:55:42.735037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated at ~12 ms intervals; timestamps 14:55:42.747 through 14:55:42.891 elided ...]
00:36:31.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1812988) - No such process
00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1812988
00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:31.068 14:55:42
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:31.068 delay0 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.068 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:36:31.327 [2024-11-20 14:55:43.045781] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:39.453 Initializing NVMe Controllers 00:36:39.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:39.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:39.453 Initialization complete. Launching workers. 
00:36:39.453 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 27883 00:36:39.453 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27996, failed to submit 131 00:36:39.453 success 27902, unsuccessful 94, failed 0 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.453 rmmod nvme_tcp 00:36:39.453 rmmod nvme_fabrics 00:36:39.453 rmmod nvme_keyring 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1811344 ']' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1811344 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 1811344 ']' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1811344 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1811344 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1811344' 00:36:39.453 killing process with pid 1811344 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1811344 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1811344 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.453 14:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.832 00:36:40.832 real 0m32.372s 00:36:40.832 user 0m41.371s 00:36:40.832 sys 0m13.519s 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:40.832 ************************************ 00:36:40.832 END TEST nvmf_zcopy 00:36:40.832 ************************************ 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:40.832 
************************************ 00:36:40.832 START TEST nvmf_nmic 00:36:40.832 ************************************ 00:36:40.832 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:36:41.093 * Looking for test storage... 00:36:41.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.093 14:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.093 14:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.093 --rc genhtml_branch_coverage=1 00:36:41.093 --rc genhtml_function_coverage=1 00:36:41.093 --rc genhtml_legend=1 00:36:41.093 --rc geninfo_all_blocks=1 00:36:41.093 --rc geninfo_unexecuted_blocks=1 00:36:41.093 00:36:41.093 ' 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.093 --rc genhtml_branch_coverage=1 00:36:41.093 --rc genhtml_function_coverage=1 00:36:41.093 --rc genhtml_legend=1 00:36:41.093 --rc geninfo_all_blocks=1 00:36:41.093 --rc geninfo_unexecuted_blocks=1 00:36:41.093 00:36:41.093 ' 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.093 --rc genhtml_branch_coverage=1 00:36:41.093 --rc genhtml_function_coverage=1 00:36:41.093 --rc genhtml_legend=1 00:36:41.093 --rc geninfo_all_blocks=1 00:36:41.093 --rc geninfo_unexecuted_blocks=1 00:36:41.093 00:36:41.093 ' 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:41.093 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.093 --rc genhtml_branch_coverage=1 00:36:41.093 --rc genhtml_function_coverage=1 00:36:41.093 --rc genhtml_legend=1 00:36:41.093 --rc geninfo_all_blocks=1 00:36:41.093 --rc geninfo_unexecuted_blocks=1 00:36:41.093 00:36:41.093 ' 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.093 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:41.093 14:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.094 14:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.094 14:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.668 14:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.668 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:36:47.668 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:47.668 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:47.669 14:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:47.669 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:47.669 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:47.669 14:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:47.669 Found net devices under 0000:86:00.0: cvl_0_0 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.669 14:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:47.669 Found net devices under 0000:86:00.1: cvl_0_1 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:47.669 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:47.670 14:55:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:47.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:47.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:36:47.670 00:36:47.670 --- 10.0.0.2 ping statistics --- 00:36:47.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.670 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:47.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:47.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:36:47.670 00:36:47.670 --- 10.0.0.1 ping statistics --- 00:36:47.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:47.670 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1818502 
00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1818502 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1818502 ']' 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.670 14:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 [2024-11-20 14:55:58.854952] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:47.670 [2024-11-20 14:55:58.855971] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:36:47.670 [2024-11-20 14:55:58.856012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.670 [2024-11-20 14:55:58.935728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.670 [2024-11-20 14:55:58.980070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.670 [2024-11-20 14:55:58.980112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.670 [2024-11-20 14:55:58.980119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.670 [2024-11-20 14:55:58.980125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.670 [2024-11-20 14:55:58.980130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:47.670 [2024-11-20 14:55:58.981595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.670 [2024-11-20 14:55:58.981706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:47.670 [2024-11-20 14:55:58.981811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.670 [2024-11-20 14:55:58.981812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:47.670 [2024-11-20 14:55:59.051388] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:47.670 [2024-11-20 14:55:59.051991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:47.670 [2024-11-20 14:55:59.052390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:36:47.670 [2024-11-20 14:55:59.052811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:47.670 [2024-11-20 14:55:59.052850] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.670 [2024-11-20 14:55:59.118605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.670 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 Malloc0 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 [2024-11-20 14:55:59.198744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.671 14:55:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:47.671 test case1: single bdev can't be used in multiple subsystems 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 [2024-11-20 14:55:59.230295] 
bdev.c:8526:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:47.671 [2024-11-20 14:55:59.230317] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:47.671 [2024-11-20 14:55:59.230325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:47.671 request: 00:36:47.671 { 00:36:47.671 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:47.671 "namespace": { 00:36:47.671 "bdev_name": "Malloc0", 00:36:47.671 "no_auto_visible": false, 00:36:47.671 "hide_metadata": false 00:36:47.671 }, 00:36:47.671 "method": "nvmf_subsystem_add_ns", 00:36:47.671 "req_id": 1 00:36:47.671 } 00:36:47.671 Got JSON-RPC error response 00:36:47.671 response: 00:36:47.671 { 00:36:47.671 "code": -32602, 00:36:47.671 "message": "Invalid parameters" 00:36:47.671 } 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:47.671 Adding namespace failed - expected result. 
00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:47.671 test case2: host connect to nvmf target in multiple paths 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:47.671 [2024-11-20 14:55:59.242383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:47.671 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:47.930 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:47.930 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:47.930 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:47.930 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:47.930 14:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:50.464 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:50.464 [global] 00:36:50.464 thread=1 00:36:50.464 invalidate=1 00:36:50.464 rw=write 00:36:50.464 time_based=1 00:36:50.464 runtime=1 00:36:50.464 ioengine=libaio 00:36:50.464 direct=1 00:36:50.464 bs=4096 00:36:50.464 iodepth=1 00:36:50.464 norandommap=0 00:36:50.464 numjobs=1 00:36:50.464 00:36:50.464 verify_dump=1 00:36:50.464 verify_backlog=512 00:36:50.464 verify_state_save=0 00:36:50.464 do_verify=1 00:36:50.464 verify=crc32c-intel 00:36:50.464 [job0] 00:36:50.464 filename=/dev/nvme0n1 00:36:50.464 Could not set queue depth (nvme0n1) 00:36:50.464 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:50.464 fio-3.35 00:36:50.464 Starting 1 thread 00:36:51.401 00:36:51.401 job0: (groupid=0, jobs=1): err= 0: pid=1819115: Wed Nov 20 
14:56:03 2024 00:36:51.401 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:36:51.401 slat (nsec): min=9784, max=22301, avg=21101.09, stdev=2480.85 00:36:51.401 clat (usec): min=40837, max=41110, avg=40975.01, stdev=66.48 00:36:51.401 lat (usec): min=40858, max=41131, avg=40996.11, stdev=65.58 00:36:51.401 clat percentiles (usec): 00:36:51.401 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:36:51.401 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:51.401 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:51.401 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:51.401 | 99.99th=[41157] 00:36:51.401 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:36:51.401 slat (nsec): min=9933, max=40370, avg=11306.97, stdev=2208.45 00:36:51.401 clat (usec): min=133, max=323, avg=142.38, stdev=13.23 00:36:51.401 lat (usec): min=143, max=359, avg=153.68, stdev=14.78 00:36:51.401 clat percentiles (usec): 00:36:51.401 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:36:51.401 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:36:51.401 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 147], 95.00th=[ 151], 00:36:51.401 | 99.00th=[ 169], 99.50th=[ 229], 99.90th=[ 326], 99.95th=[ 326], 00:36:51.401 | 99.99th=[ 326] 00:36:51.401 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:51.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:51.401 lat (usec) : 250=95.33%, 500=0.37% 00:36:51.401 lat (msec) : 50=4.30% 00:36:51.401 cpu : usr=0.39%, sys=0.98%, ctx=535, majf=0, minf=1 00:36:51.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.401 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:51.401 00:36:51.401 Run status group 0 (all jobs): 00:36:51.401 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), run=1023-1023msec 00:36:51.401 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:36:51.401 00:36:51.401 Disk stats (read/write): 00:36:51.401 nvme0n1: ios=69/512, merge=0/0, ticks=998/72, in_queue=1070, util=95.39% 00:36:51.401 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:51.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:36:51.660 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:51.660 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:51.660 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:51.660 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:51.661 14:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:51.661 rmmod nvme_tcp 00:36:51.661 rmmod nvme_fabrics 00:36:51.661 rmmod nvme_keyring 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1818502 ']' 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1818502 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1818502 ']' 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1818502 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.661 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818502 
00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818502' 00:36:51.920 killing process with pid 1818502 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1818502 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1818502 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.920 14:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.920 14:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:54.456 00:36:54.456 real 0m13.140s 00:36:54.456 user 0m24.183s 00:36:54.456 sys 0m6.024s 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:54.456 ************************************ 00:36:54.456 END TEST nvmf_nmic 00:36:54.456 ************************************ 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:54.456 ************************************ 00:36:54.456 START TEST nvmf_fio_target 00:36:54.456 ************************************ 00:36:54.456 14:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:54.456 * Looking for test storage... 
00:36:54.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.456 
14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:54.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.456 --rc genhtml_branch_coverage=1 00:36:54.456 --rc genhtml_function_coverage=1 00:36:54.456 --rc genhtml_legend=1 00:36:54.456 --rc geninfo_all_blocks=1 00:36:54.456 --rc geninfo_unexecuted_blocks=1 00:36:54.456 00:36:54.456 ' 00:36:54.456 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:54.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.456 --rc genhtml_branch_coverage=1 00:36:54.457 --rc genhtml_function_coverage=1 00:36:54.457 --rc genhtml_legend=1 00:36:54.457 --rc geninfo_all_blocks=1 00:36:54.457 --rc geninfo_unexecuted_blocks=1 00:36:54.457 00:36:54.457 ' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:54.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.457 --rc genhtml_branch_coverage=1 00:36:54.457 --rc genhtml_function_coverage=1 00:36:54.457 --rc genhtml_legend=1 00:36:54.457 --rc geninfo_all_blocks=1 00:36:54.457 --rc geninfo_unexecuted_blocks=1 00:36:54.457 00:36:54.457 ' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:54.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.457 --rc genhtml_branch_coverage=1 00:36:54.457 --rc genhtml_function_coverage=1 00:36:54.457 --rc genhtml_legend=1 00:36:54.457 --rc geninfo_all_blocks=1 
00:36:54.457 --rc geninfo_unexecuted_blocks=1 00:36:54.457 00:36:54.457 ' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:54.457 
14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.457 14:56:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.457 
14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:54.457 14:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:54.457 14:56:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:01.030 14:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:01.030 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:01.030 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.030 
14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:01.030 Found net 
devices under 0000:86:00.0: cvl_0_0 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:01.030 Found net devices under 0000:86:00.1: cvl_0_1 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:01.030 14:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:01.030 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.031 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:01.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:37:01.031 00:37:01.031 --- 10.0.0.2 ping statistics --- 00:37:01.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.031 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:37:01.031 00:37:01.031 --- 10.0.0.1 ping statistics --- 00:37:01.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.031 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:01.031 14:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1822839 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1822839 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1822839 ']' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:01.031 [2024-11-20 14:56:12.115929] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:01.031 [2024-11-20 14:56:12.116869] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:37:01.031 [2024-11-20 14:56:12.116902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.031 [2024-11-20 14:56:12.178650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:01.031 [2024-11-20 14:56:12.222386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.031 [2024-11-20 14:56:12.222423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.031 [2024-11-20 14:56:12.222431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.031 [2024-11-20 14:56:12.222437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.031 [2024-11-20 14:56:12.222442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:01.031 [2024-11-20 14:56:12.223920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:01.031 [2024-11-20 14:56:12.224066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.031 [2024-11-20 14:56:12.224066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:01.031 [2024-11-20 14:56:12.224033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:01.031 [2024-11-20 14:56:12.293642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:01.031 [2024-11-20 14:56:12.294678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:01.031 [2024-11-20 14:56:12.294758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:01.031 [2024-11-20 14:56:12.295144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:01.031 [2024-11-20 14:56:12.295180] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:01.031 [2024-11-20 14:56:12.532906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:01.031 14:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:37:01.290 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:01.290 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.290 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:01.290 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:01.549 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:01.549 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:01.808 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:02.067 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:02.067 14:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:02.067 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:02.067 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:02.325 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:37:02.325 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:02.583 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:02.842 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:02.842 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.842 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:02.842 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:03.100 14:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:03.358 [2024-11-20 14:56:15.112841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.358 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:03.617 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:03.617 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:03.875 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:03.875 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:37:03.875 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:03.875 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:37:03.875 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:37:03.875 14:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:37:06.404 14:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:06.404 [global] 00:37:06.404 thread=1 00:37:06.404 invalidate=1 00:37:06.404 rw=write 00:37:06.404 time_based=1 00:37:06.404 runtime=1 00:37:06.404 ioengine=libaio 00:37:06.404 direct=1 00:37:06.404 bs=4096 00:37:06.404 iodepth=1 00:37:06.404 norandommap=0 00:37:06.404 numjobs=1 00:37:06.404 00:37:06.404 verify_dump=1 00:37:06.404 verify_backlog=512 00:37:06.404 verify_state_save=0 00:37:06.404 do_verify=1 00:37:06.404 verify=crc32c-intel 00:37:06.404 [job0] 00:37:06.404 filename=/dev/nvme0n1 00:37:06.404 [job1] 00:37:06.404 filename=/dev/nvme0n2 00:37:06.404 [job2] 00:37:06.404 filename=/dev/nvme0n3 00:37:06.404 [job3] 00:37:06.404 filename=/dev/nvme0n4 00:37:06.404 Could not set queue depth (nvme0n1) 00:37:06.404 Could not set queue depth (nvme0n2) 00:37:06.404 Could not set queue depth (nvme0n3) 00:37:06.404 Could not set queue depth (nvme0n4) 00:37:06.404 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.404 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.404 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.404 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:06.404 fio-3.35 00:37:06.404 Starting 4 threads 00:37:07.778 00:37:07.778 job0: (groupid=0, jobs=1): err= 0: pid=1823939: Wed Nov 20 14:56:19 2024 00:37:07.778 read: IOPS=1209, BW=4839KiB/s (4955kB/s)(4844KiB/1001msec) 00:37:07.778 slat (nsec): min=6317, max=30152, avg=7357.68, stdev=1790.92 00:37:07.778 clat (usec): min=173, max=41091, avg=607.62, stdev=4035.78 00:37:07.778 lat (usec): min=186, 
max=41112, avg=614.98, stdev=4037.15 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:37:07.778 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 192], 60.00th=[ 194], 00:37:07.778 | 70.00th=[ 196], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 258], 00:37:07.778 | 99.00th=[ 388], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:07.778 | 99.99th=[41157] 00:37:07.778 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:37:07.778 slat (nsec): min=9185, max=40559, avg=10894.31, stdev=2023.46 00:37:07.778 clat (usec): min=124, max=335, avg=151.29, stdev=27.68 00:37:07.778 lat (usec): min=135, max=376, avg=162.18, stdev=28.85 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:37:07.778 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:37:07.778 | 70.00th=[ 163], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 200], 00:37:07.778 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 258], 99.95th=[ 338], 00:37:07.778 | 99.99th=[ 338] 00:37:07.778 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:37:07.778 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:07.778 lat (usec) : 250=91.92%, 500=7.64% 00:37:07.778 lat (msec) : 50=0.44% 00:37:07.778 cpu : usr=1.80%, sys=2.50%, ctx=2747, majf=0, minf=1 00:37:07.778 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 issued rwts: total=1211,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.778 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.778 job1: (groupid=0, jobs=1): err= 0: pid=1823940: Wed Nov 20 14:56:19 2024 00:37:07.778 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 
00:37:07.778 slat (nsec): min=10016, max=23648, avg=17090.00, stdev=5666.96 00:37:07.778 clat (usec): min=40663, max=41160, avg=40960.53, stdev=99.02 00:37:07.778 lat (usec): min=40673, max=41170, avg=40977.62, stdev=99.18 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:37:07.778 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:07.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:07.778 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:07.778 | 99.99th=[41157] 00:37:07.778 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:37:07.778 slat (nsec): min=11116, max=53663, avg=13795.50, stdev=4125.22 00:37:07.778 clat (usec): min=128, max=302, avg=186.41, stdev=21.17 00:37:07.778 lat (usec): min=140, max=336, avg=200.20, stdev=22.34 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 159], 20.00th=[ 178], 00:37:07.778 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:37:07.778 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 217], 00:37:07.778 | 99.00th=[ 237], 99.50th=[ 253], 99.90th=[ 302], 99.95th=[ 302], 00:37:07.778 | 99.99th=[ 302] 00:37:07.778 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:37:07.778 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:07.778 lat (usec) : 250=95.13%, 500=0.75% 00:37:07.778 lat (msec) : 50=4.12% 00:37:07.778 cpu : usr=0.50%, sys=0.69%, ctx=537, majf=0, minf=1 00:37:07.778 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.778 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:37:07.778 job2: (groupid=0, jobs=1): err= 0: pid=1823941: Wed Nov 20 14:56:19 2024 00:37:07.778 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:37:07.778 slat (nsec): min=11279, max=27220, avg=21711.18, stdev=2803.42 00:37:07.778 clat (usec): min=40659, max=41079, avg=40955.03, stdev=83.21 00:37:07.778 lat (usec): min=40670, max=41100, avg=40976.74, stdev=84.74 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:07.778 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:07.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:07.778 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:07.778 | 99.99th=[41157] 00:37:07.778 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:37:07.778 slat (nsec): min=12318, max=42624, avg=13822.13, stdev=2187.09 00:37:07.778 clat (usec): min=153, max=234, avg=178.06, stdev=11.83 00:37:07.778 lat (usec): min=166, max=270, avg=191.88, stdev=12.53 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:37:07.778 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:37:07.778 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:37:07.778 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 235], 99.95th=[ 235], 00:37:07.778 | 99.99th=[ 235] 00:37:07.778 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:37:07.778 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:07.778 lat (usec) : 250=95.88% 00:37:07.778 lat (msec) : 50=4.12% 00:37:07.778 cpu : usr=0.40%, sys=1.00%, ctx=534, majf=0, minf=2 00:37:07.778 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.778 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.778 job3: (groupid=0, jobs=1): err= 0: pid=1823942: Wed Nov 20 14:56:19 2024 00:37:07.778 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:37:07.778 slat (nsec): min=10025, max=24166, avg=21950.32, stdev=2966.23 00:37:07.778 clat (usec): min=40588, max=41130, avg=40945.99, stdev=106.51 00:37:07.778 lat (usec): min=40598, max=41147, avg=40967.94, stdev=108.06 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:37:07.778 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:07.778 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:07.778 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:07.778 | 99.99th=[41157] 00:37:07.778 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:37:07.778 slat (nsec): min=10597, max=36167, avg=12262.39, stdev=2237.06 00:37:07.778 clat (usec): min=157, max=344, avg=180.15, stdev=12.74 00:37:07.778 lat (usec): min=168, max=380, avg=192.41, stdev=13.60 00:37:07.778 clat percentiles (usec): 00:37:07.778 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:37:07.778 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:37:07.778 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 198], 00:37:07.778 | 99.00th=[ 208], 99.50th=[ 223], 99.90th=[ 347], 99.95th=[ 347], 00:37:07.778 | 99.99th=[ 347] 00:37:07.778 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:37:07.778 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:07.778 lat (usec) : 250=95.69%, 500=0.19% 00:37:07.778 lat (msec) : 50=4.12% 00:37:07.778 cpu : usr=0.50%, sys=0.80%, ctx=534, majf=0, minf=2 
00:37:07.778 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.778 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.778 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:07.778 00:37:07.778 Run status group 0 (all jobs): 00:37:07.779 READ: bw=5057KiB/s (5179kB/s), 87.1KiB/s-4839KiB/s (89.2kB/s-4955kB/s), io=5108KiB (5231kB), run=1001-1010msec 00:37:07.779 WRITE: bw=11.9MiB/s (12.5MB/s), 2028KiB/s-6138KiB/s (2076kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1010msec 00:37:07.779 00:37:07.779 Disk stats (read/write): 00:37:07.779 nvme0n1: ios=914/1024, merge=0/0, ticks=693/156, in_queue=849, util=86.57% 00:37:07.779 nvme0n2: ios=42/512, merge=0/0, ticks=1724/91, in_queue=1815, util=97.87% 00:37:07.779 nvme0n3: ios=18/512, merge=0/0, ticks=738/90, in_queue=828, util=88.95% 00:37:07.779 nvme0n4: ios=18/512, merge=0/0, ticks=738/81, in_queue=819, util=89.70% 00:37:07.779 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:07.779 [global] 00:37:07.779 thread=1 00:37:07.779 invalidate=1 00:37:07.779 rw=randwrite 00:37:07.779 time_based=1 00:37:07.779 runtime=1 00:37:07.779 ioengine=libaio 00:37:07.779 direct=1 00:37:07.779 bs=4096 00:37:07.779 iodepth=1 00:37:07.779 norandommap=0 00:37:07.779 numjobs=1 00:37:07.779 00:37:07.779 verify_dump=1 00:37:07.779 verify_backlog=512 00:37:07.779 verify_state_save=0 00:37:07.779 do_verify=1 00:37:07.779 verify=crc32c-intel 00:37:07.779 [job0] 00:37:07.779 filename=/dev/nvme0n1 00:37:07.779 [job1] 00:37:07.779 filename=/dev/nvme0n2 00:37:07.779 [job2] 00:37:07.779 filename=/dev/nvme0n3 00:37:07.779 [job3] 00:37:07.779 filename=/dev/nvme0n4 
00:37:07.779 Could not set queue depth (nvme0n1) 00:37:07.779 Could not set queue depth (nvme0n2) 00:37:07.779 Could not set queue depth (nvme0n3) 00:37:07.779 Could not set queue depth (nvme0n4) 00:37:07.779 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.779 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.779 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.779 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:07.779 fio-3.35 00:37:07.779 Starting 4 threads 00:37:09.197 00:37:09.197 job0: (groupid=0, jobs=1): err= 0: pid=1824307: Wed Nov 20 14:56:20 2024 00:37:09.197 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:37:09.197 slat (nsec): min=9208, max=25662, avg=22208.77, stdev=3098.85 00:37:09.197 clat (usec): min=40656, max=41093, avg=40959.15, stdev=80.48 00:37:09.197 lat (usec): min=40665, max=41117, avg=40981.35, stdev=82.86 00:37:09.197 clat percentiles (usec): 00:37:09.197 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:09.197 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:09.197 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:09.197 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:09.197 | 99.99th=[41157] 00:37:09.197 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:37:09.197 slat (nsec): min=9668, max=38889, avg=10906.27, stdev=2155.14 00:37:09.197 clat (usec): min=151, max=311, avg=189.60, stdev=17.93 00:37:09.197 lat (usec): min=161, max=350, avg=200.51, stdev=18.42 00:37:09.197 clat percentiles (usec): 00:37:09.197 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:37:09.197 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 
60.00th=[ 194], 00:37:09.197 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:37:09.197 | 99.00th=[ 249], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 310], 00:37:09.197 | 99.99th=[ 310] 00:37:09.197 bw ( KiB/s): min= 4096, max= 4096, per=17.05%, avg=4096.00, stdev= 0.00, samples=1 00:37:09.197 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:09.197 lat (usec) : 250=94.94%, 500=0.94% 00:37:09.197 lat (msec) : 50=4.12% 00:37:09.197 cpu : usr=0.40%, sys=0.90%, ctx=534, majf=0, minf=1 00:37:09.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.197 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.197 job1: (groupid=0, jobs=1): err= 0: pid=1824308: Wed Nov 20 14:56:20 2024 00:37:09.197 read: IOPS=23, BW=93.8KiB/s (96.1kB/s)(96.0KiB/1023msec) 00:37:09.197 slat (nsec): min=8594, max=32663, avg=23199.75, stdev=4962.71 00:37:09.197 clat (usec): min=425, max=42769, avg=38035.68, stdev=10337.84 00:37:09.197 lat (usec): min=458, max=42795, avg=38058.87, stdev=10336.17 00:37:09.197 clat percentiles (usec): 00:37:09.197 | 1.00th=[ 424], 5.00th=[ 9110], 10.00th=[40633], 20.00th=[41157], 00:37:09.197 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:09.197 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:09.197 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:09.198 | 99.99th=[42730] 00:37:09.198 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:37:09.198 slat (nsec): min=11129, max=55362, avg=12303.10, stdev=2407.95 00:37:09.198 clat (usec): min=130, max=359, avg=197.79, stdev=14.17 00:37:09.198 lat (usec): min=158, max=395, avg=210.10, 
stdev=14.36 00:37:09.198 clat percentiles (usec): 00:37:09.198 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:37:09.198 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:37:09.198 | 70.00th=[ 204], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 215], 00:37:09.198 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 359], 99.95th=[ 359], 00:37:09.198 | 99.99th=[ 359] 00:37:09.198 bw ( KiB/s): min= 4096, max= 4096, per=17.05%, avg=4096.00, stdev= 0.00, samples=1 00:37:09.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:09.198 lat (usec) : 250=95.15%, 500=0.56% 00:37:09.198 lat (msec) : 10=0.19%, 50=4.10% 00:37:09.198 cpu : usr=0.00%, sys=0.88%, ctx=537, majf=0, minf=1 00:37:09.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.198 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.198 job2: (groupid=0, jobs=1): err= 0: pid=1824309: Wed Nov 20 14:56:20 2024 00:37:09.198 read: IOPS=2070, BW=8284KiB/s (8483kB/s)(8292KiB/1001msec) 00:37:09.198 slat (nsec): min=6632, max=42128, avg=8669.76, stdev=1551.86 00:37:09.198 clat (usec): min=180, max=1478, avg=249.24, stdev=53.27 00:37:09.198 lat (usec): min=189, max=1487, avg=257.91, stdev=53.37 00:37:09.198 clat percentiles (usec): 00:37:09.198 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 212], 00:37:09.198 | 30.00th=[ 221], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:37:09.198 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:37:09.198 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 586], 99.95th=[ 660], 00:37:09.198 | 99.99th=[ 1483] 00:37:09.198 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:37:09.198 slat (nsec): 
min=10027, max=48645, avg=12447.20, stdev=2247.29 00:37:09.198 clat (usec): min=125, max=1874, avg=164.48, stdev=66.78 00:37:09.198 lat (usec): min=137, max=1887, avg=176.93, stdev=66.98 00:37:09.198 clat percentiles (usec): 00:37:09.198 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:37:09.198 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:37:09.198 | 70.00th=[ 169], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 208], 00:37:09.198 | 99.00th=[ 255], 99.50th=[ 285], 99.90th=[ 1647], 99.95th=[ 1827], 00:37:09.198 | 99.99th=[ 1876] 00:37:09.198 bw ( KiB/s): min= 9384, max= 9384, per=39.06%, avg=9384.00, stdev= 0.00, samples=1 00:37:09.198 iops : min= 2346, max= 2346, avg=2346.00, stdev= 0.00, samples=1 00:37:09.198 lat (usec) : 250=79.78%, 500=19.86%, 750=0.26% 00:37:09.198 lat (msec) : 2=0.11% 00:37:09.198 cpu : usr=3.20%, sys=4.50%, ctx=4634, majf=0, minf=1 00:37:09.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.198 issued rwts: total=2073,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.198 job3: (groupid=0, jobs=1): err= 0: pid=1824310: Wed Nov 20 14:56:20 2024 00:37:09.198 read: IOPS=2147, BW=8591KiB/s (8798kB/s)(8600KiB/1001msec) 00:37:09.198 slat (nsec): min=7337, max=44376, avg=8550.47, stdev=1273.74 00:37:09.198 clat (usec): min=166, max=520, avg=242.61, stdev=47.86 00:37:09.198 lat (usec): min=189, max=529, avg=251.16, stdev=47.92 00:37:09.198 clat percentiles (usec): 00:37:09.198 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 212], 00:37:09.198 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 243], 00:37:09.198 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 306], 00:37:09.198 | 99.00th=[ 469], 99.50th=[ 494], 
99.90th=[ 519], 99.95th=[ 519], 00:37:09.198 | 99.99th=[ 523] 00:37:09.198 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:37:09.198 slat (nsec): min=10489, max=42874, avg=11820.94, stdev=1908.52 00:37:09.198 clat (usec): min=127, max=467, avg=162.54, stdev=24.78 00:37:09.198 lat (usec): min=139, max=477, avg=174.36, stdev=24.93 00:37:09.198 clat percentiles (usec): 00:37:09.198 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:37:09.198 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:37:09.198 | 70.00th=[ 165], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:37:09.198 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 306], 99.95th=[ 396], 00:37:09.198 | 99.99th=[ 469] 00:37:09.198 bw ( KiB/s): min=10848, max=10848, per=45.16%, avg=10848.00, stdev= 0.00, samples=1 00:37:09.198 iops : min= 2712, max= 2712, avg=2712.00, stdev= 0.00, samples=1 00:37:09.198 lat (usec) : 250=86.82%, 500=13.06%, 750=0.13% 00:37:09.198 cpu : usr=3.70%, sys=7.80%, ctx=4711, majf=0, minf=1 00:37:09.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.198 issued rwts: total=2150,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:09.198 00:37:09.198 Run status group 0 (all jobs): 00:37:09.198 READ: bw=16.3MiB/s (17.1MB/s), 87.5KiB/s-8591KiB/s (89.6kB/s-8798kB/s), io=16.7MiB (17.5MB), run=1001-1023msec 00:37:09.198 WRITE: bw=23.5MiB/s (24.6MB/s), 2002KiB/s-9.99MiB/s (2050kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1023msec 00:37:09.198 00:37:09.198 Disk stats (read/write): 00:37:09.198 nvme0n1: ios=68/512, merge=0/0, ticks=754/90, in_queue=844, util=87.17% 00:37:09.198 nvme0n2: ios=69/512, merge=0/0, ticks=1789/94, in_queue=1883, util=98.38% 00:37:09.198 
nvme0n3: ios=1887/2048, merge=0/0, ticks=1334/331, in_queue=1665, util=98.65% 00:37:09.198 nvme0n4: ios=1984/2048, merge=0/0, ticks=890/322, in_queue=1212, util=98.43% 00:37:09.198 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:09.198 [global] 00:37:09.198 thread=1 00:37:09.198 invalidate=1 00:37:09.198 rw=write 00:37:09.198 time_based=1 00:37:09.198 runtime=1 00:37:09.198 ioengine=libaio 00:37:09.198 direct=1 00:37:09.198 bs=4096 00:37:09.198 iodepth=128 00:37:09.198 norandommap=0 00:37:09.198 numjobs=1 00:37:09.198 00:37:09.198 verify_dump=1 00:37:09.198 verify_backlog=512 00:37:09.198 verify_state_save=0 00:37:09.198 do_verify=1 00:37:09.198 verify=crc32c-intel 00:37:09.198 [job0] 00:37:09.198 filename=/dev/nvme0n1 00:37:09.198 [job1] 00:37:09.198 filename=/dev/nvme0n2 00:37:09.198 [job2] 00:37:09.198 filename=/dev/nvme0n3 00:37:09.198 [job3] 00:37:09.198 filename=/dev/nvme0n4 00:37:09.198 Could not set queue depth (nvme0n1) 00:37:09.198 Could not set queue depth (nvme0n2) 00:37:09.198 Could not set queue depth (nvme0n3) 00:37:09.198 Could not set queue depth (nvme0n4) 00:37:09.569 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.569 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.569 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.569 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:09.569 fio-3.35 00:37:09.569 Starting 4 threads 00:37:10.576 00:37:10.576 job0: (groupid=0, jobs=1): err= 0: pid=1824677: Wed Nov 20 14:56:22 2024 00:37:10.576 read: IOPS=5423, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1007msec) 00:37:10.576 slat (nsec): min=1199, max=18346k, 
avg=88801.72, stdev=707605.14 00:37:10.576 clat (usec): min=5744, max=32997, avg=11932.77, stdev=3483.55 00:37:10.576 lat (usec): min=5755, max=35194, avg=12021.57, stdev=3536.94 00:37:10.576 clat percentiles (usec): 00:37:10.576 | 1.00th=[ 6521], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 9110], 00:37:10.576 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11076], 60.00th=[11863], 00:37:10.576 | 70.00th=[13173], 80.00th=[14877], 90.00th=[16909], 95.00th=[18744], 00:37:10.576 | 99.00th=[21365], 99.50th=[21890], 99.90th=[24249], 99.95th=[24249], 00:37:10.576 | 99.99th=[32900] 00:37:10.576 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:37:10.576 slat (usec): min=2, max=18955, avg=77.92, stdev=557.73 00:37:10.576 clat (usec): min=2218, max=38959, avg=11064.91, stdev=4543.50 00:37:10.576 lat (usec): min=2233, max=38980, avg=11142.82, stdev=4566.76 00:37:10.576 clat percentiles (usec): 00:37:10.576 | 1.00th=[ 3687], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 8455], 00:37:10.576 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[10814], 00:37:10.576 | 70.00th=[11600], 80.00th=[12518], 90.00th=[14877], 95.00th=[20579], 00:37:10.576 | 99.00th=[31065], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:37:10.576 | 99.99th=[39060] 00:37:10.576 bw ( KiB/s): min=20584, max=24472, per=32.44%, avg=22528.00, stdev=2749.23, samples=2 00:37:10.576 iops : min= 5146, max= 6118, avg=5632.00, stdev=687.31, samples=2 00:37:10.576 lat (msec) : 4=0.56%, 10=36.42%, 20=58.53%, 50=4.49% 00:37:10.576 cpu : usr=4.57%, sys=6.76%, ctx=511, majf=0, minf=1 00:37:10.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:37:10.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.576 issued rwts: total=5461,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.576 latency : target=0, window=0, percentile=100.00%, depth=128 
00:37:10.576 job1: (groupid=0, jobs=1): err= 0: pid=1824678: Wed Nov 20 14:56:22 2024 00:37:10.576 read: IOPS=2665, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1008msec) 00:37:10.576 slat (nsec): min=1097, max=24003k, avg=194772.68, stdev=1463087.82 00:37:10.576 clat (msec): min=7, max=109, avg=27.68, stdev=18.95 00:37:10.576 lat (msec): min=7, max=111, avg=27.87, stdev=19.05 00:37:10.576 clat percentiles (msec): 00:37:10.576 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 14], 00:37:10.576 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 26], 00:37:10.576 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 58], 95.00th=[ 62], 00:37:10.576 | 99.00th=[ 100], 99.50th=[ 107], 99.90th=[ 110], 99.95th=[ 110], 00:37:10.576 | 99.99th=[ 110] 00:37:10.576 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:37:10.576 slat (nsec): min=1837, max=18434k, avg=143954.32, stdev=1019260.93 00:37:10.576 clat (usec): min=686, max=99458, avg=17428.49, stdev=11772.62 00:37:10.576 lat (usec): min=1061, max=99465, avg=17572.45, stdev=11892.09 00:37:10.576 clat percentiles (usec): 00:37:10.576 | 1.00th=[ 3032], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9765], 00:37:10.576 | 30.00th=[11207], 40.00th=[12649], 50.00th=[14615], 60.00th=[16712], 00:37:10.576 | 70.00th=[19268], 80.00th=[22938], 90.00th=[27395], 95.00th=[32637], 00:37:10.576 | 99.00th=[74974], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:37:10.576 | 99.99th=[99091] 00:37:10.576 bw ( KiB/s): min=12288, max=12288, per=17.69%, avg=12288.00, stdev= 0.00, samples=2 00:37:10.576 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:37:10.577 lat (usec) : 750=0.02% 00:37:10.577 lat (msec) : 2=0.33%, 4=0.42%, 10=16.06%, 20=42.09%, 50=34.02% 00:37:10.577 lat (msec) : 100=6.72%, 250=0.35% 00:37:10.577 cpu : usr=2.58%, sys=2.88%, ctx=234, majf=0, minf=1 00:37:10.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:37:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.577 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.577 job2: (groupid=0, jobs=1): err= 0: pid=1824679: Wed Nov 20 14:56:22 2024 00:37:10.577 read: IOPS=4181, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1010msec) 00:37:10.577 slat (nsec): min=1338, max=19981k, avg=111326.84, stdev=885752.48 00:37:10.577 clat (usec): min=2414, max=38923, avg=14574.60, stdev=4392.50 00:37:10.577 lat (usec): min=3846, max=38931, avg=14685.92, stdev=4461.37 00:37:10.577 clat percentiles (usec): 00:37:10.577 | 1.00th=[ 5342], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11731], 00:37:10.577 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13829], 60.00th=[14353], 00:37:10.577 | 70.00th=[15664], 80.00th=[16909], 90.00th=[20055], 95.00th=[21627], 00:37:10.577 | 99.00th=[31851], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:37:10.577 | 99.99th=[39060] 00:37:10.577 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:37:10.577 slat (usec): min=2, max=17576, avg=103.65, stdev=839.72 00:37:10.577 clat (usec): min=455, max=56040, avg=14241.50, stdev=7415.93 00:37:10.577 lat (usec): min=469, max=56090, avg=14345.14, stdev=7488.02 00:37:10.577 clat percentiles (usec): 00:37:10.577 | 1.00th=[ 2376], 5.00th=[ 5669], 10.00th=[ 7373], 20.00th=[ 9765], 00:37:10.577 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12256], 60.00th=[13042], 00:37:10.577 | 70.00th=[14877], 80.00th=[17957], 90.00th=[22938], 95.00th=[31065], 00:37:10.577 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:37:10.577 | 99.99th=[55837] 00:37:10.577 bw ( KiB/s): min=16432, max=20424, per=26.54%, avg=18428.00, stdev=2822.77, samples=2 00:37:10.577 iops : min= 4108, max= 5106, avg=4607.00, stdev=705.69, samples=2 00:37:10.577 lat (usec) : 500=0.03%, 1000=0.08% 00:37:10.577 lat 
(msec) : 4=1.46%, 10=14.35%, 20=70.16%, 50=13.89%, 100=0.02% 00:37:10.577 cpu : usr=3.17%, sys=5.55%, ctx=274, majf=0, minf=1 00:37:10.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:37:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.577 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.577 job3: (groupid=0, jobs=1): err= 0: pid=1824680: Wed Nov 20 14:56:22 2024 00:37:10.577 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:37:10.577 slat (nsec): min=1437, max=15804k, avg=109555.52, stdev=746225.45 00:37:10.577 clat (usec): min=7360, max=67199, avg=14600.12, stdev=6264.93 00:37:10.577 lat (usec): min=7789, max=67203, avg=14709.67, stdev=6302.37 00:37:10.577 clat percentiles (usec): 00:37:10.577 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10814], 00:37:10.577 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:37:10.577 | 70.00th=[14877], 80.00th=[15926], 90.00th=[19792], 95.00th=[27395], 00:37:10.577 | 99.00th=[42730], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:37:10.577 | 99.99th=[67634] 00:37:10.577 write: IOPS=4189, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1008msec); 0 zone resets 00:37:10.577 slat (usec): min=2, max=24317, avg=124.14, stdev=1019.06 00:37:10.577 clat (usec): min=1203, max=56794, avg=16120.03, stdev=9167.40 00:37:10.577 lat (usec): min=1267, max=74117, avg=16244.17, stdev=9270.87 00:37:10.577 clat percentiles (usec): 00:37:10.577 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[11207], 20.00th=[11600], 00:37:10.577 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[13698], 00:37:10.577 | 70.00th=[14222], 80.00th=[16057], 90.00th=[31327], 95.00th=[38011], 00:37:10.577 | 99.00th=[56361], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 
00:37:10.577 | 99.99th=[56886] 00:37:10.577 bw ( KiB/s): min=16384, max=16384, per=23.59%, avg=16384.00, stdev= 0.00, samples=2 00:37:10.577 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:37:10.577 lat (msec) : 2=0.01%, 10=9.17%, 20=78.87%, 50=10.76%, 100=1.19% 00:37:10.577 cpu : usr=4.07%, sys=5.06%, ctx=371, majf=0, minf=1 00:37:10.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:10.577 issued rwts: total=4096,4223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:10.577 00:37:10.577 Run status group 0 (all jobs): 00:37:10.577 READ: bw=63.7MiB/s (66.8MB/s), 10.4MiB/s-21.2MiB/s (10.9MB/s-22.2MB/s), io=64.3MiB (67.4MB), run=1007-1010msec 00:37:10.577 WRITE: bw=67.8MiB/s (71.1MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.9MB/s), io=68.5MiB (71.8MB), run=1007-1010msec 00:37:10.577 00:37:10.577 Disk stats (read/write): 00:37:10.577 nvme0n1: ios=4642/4655, merge=0/0, ticks=54169/50920, in_queue=105089, util=99.60% 00:37:10.577 nvme0n2: ios=2233/2560, merge=0/0, ticks=22276/24810, in_queue=47086, util=86.80% 00:37:10.577 nvme0n3: ios=3642/3587, merge=0/0, ticks=47363/42889, in_queue=90252, util=98.34% 00:37:10.577 nvme0n4: ios=3584/3813, merge=0/0, ticks=25043/27735, in_queue=52778, util=89.62% 00:37:10.577 14:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:10.577 [global] 00:37:10.577 thread=1 00:37:10.577 invalidate=1 00:37:10.577 rw=randwrite 00:37:10.577 time_based=1 00:37:10.577 runtime=1 00:37:10.577 ioengine=libaio 00:37:10.577 direct=1 00:37:10.577 bs=4096 00:37:10.577 iodepth=128 00:37:10.577 norandommap=0 00:37:10.577 
numjobs=1 00:37:10.577 00:37:10.577 verify_dump=1 00:37:10.577 verify_backlog=512 00:37:10.577 verify_state_save=0 00:37:10.577 do_verify=1 00:37:10.577 verify=crc32c-intel 00:37:10.577 [job0] 00:37:10.577 filename=/dev/nvme0n1 00:37:10.577 [job1] 00:37:10.577 filename=/dev/nvme0n2 00:37:10.577 [job2] 00:37:10.577 filename=/dev/nvme0n3 00:37:10.577 [job3] 00:37:10.577 filename=/dev/nvme0n4 00:37:10.834 Could not set queue depth (nvme0n1) 00:37:10.834 Could not set queue depth (nvme0n2) 00:37:10.834 Could not set queue depth (nvme0n3) 00:37:10.834 Could not set queue depth (nvme0n4) 00:37:11.092 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:11.092 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:11.092 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:11.092 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:11.092 fio-3.35 00:37:11.092 Starting 4 threads 00:37:12.465 00:37:12.465 job0: (groupid=0, jobs=1): err= 0: pid=1825064: Wed Nov 20 14:56:24 2024 00:37:12.465 read: IOPS=3478, BW=13.6MiB/s (14.2MB/s)(14.2MiB/1046msec) 00:37:12.465 slat (nsec): min=1360, max=16292k, avg=129991.47, stdev=917677.92 00:37:12.465 clat (msec): min=4, max=106, avg=15.78, stdev=11.09 00:37:12.465 lat (msec): min=4, max=106, avg=15.91, stdev=11.20 00:37:12.465 clat percentiles (msec): 00:37:12.465 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:37:12.465 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 15], 00:37:12.465 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 30], 00:37:12.465 | 99.00th=[ 82], 99.50th=[ 94], 99.90th=[ 107], 99.95th=[ 107], 00:37:12.465 | 99.99th=[ 107] 00:37:12.465 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:37:12.465 slat (usec): 
min=2, max=13950, avg=121.25, stdev=722.36 00:37:12.465 clat (msec): min=3, max=106, avg=18.34, stdev=13.77 00:37:12.465 lat (msec): min=3, max=106, avg=18.46, stdev=13.82 00:37:12.466 clat percentiles (msec): 00:37:12.466 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 10], 00:37:12.466 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:37:12.466 | 70.00th=[ 18], 80.00th=[ 27], 90.00th=[ 34], 95.00th=[ 52], 00:37:12.466 | 99.00th=[ 80], 99.50th=[ 87], 99.90th=[ 93], 99.95th=[ 93], 00:37:12.466 | 99.99th=[ 107] 00:37:12.466 bw ( KiB/s): min=15824, max=16368, per=24.98%, avg=16096.00, stdev=384.67, samples=2 00:37:12.466 iops : min= 3956, max= 4092, avg=4024.00, stdev=96.17, samples=2 00:37:12.466 lat (msec) : 4=0.08%, 10=14.48%, 20=64.42%, 50=17.17%, 100=3.76% 00:37:12.466 lat (msec) : 250=0.09% 00:37:12.466 cpu : usr=3.73%, sys=4.59%, ctx=331, majf=0, minf=1 00:37:12.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:12.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.466 issued rwts: total=3639,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.466 job1: (groupid=0, jobs=1): err= 0: pid=1825071: Wed Nov 20 14:56:24 2024 00:37:12.466 read: IOPS=5680, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1005msec) 00:37:12.466 slat (nsec): min=1037, max=10457k, avg=82246.68, stdev=530032.15 00:37:12.466 clat (usec): min=862, max=31483, avg=10660.61, stdev=4977.80 00:37:12.466 lat (usec): min=4950, max=31491, avg=10742.85, stdev=5001.12 00:37:12.466 clat percentiles (usec): 00:37:12.466 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7635], 00:37:12.466 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:37:12.466 | 70.00th=[10552], 80.00th=[11469], 90.00th=[16450], 95.00th=[23987], 00:37:12.466 | 99.00th=[30540], 
99.50th=[31327], 99.90th=[31589], 99.95th=[31589], 00:37:12.466 | 99.99th=[31589] 00:37:12.466 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:37:12.466 slat (usec): min=2, max=37209, avg=82.29, stdev=702.87 00:37:12.466 clat (usec): min=4377, max=45297, avg=10555.49, stdev=5423.51 00:37:12.466 lat (usec): min=4449, max=45302, avg=10637.78, stdev=5454.84 00:37:12.466 clat percentiles (usec): 00:37:12.466 | 1.00th=[ 5866], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8160], 00:37:12.466 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 9765], 60.00th=[10290], 00:37:12.466 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12125], 95.00th=[13960], 00:37:12.466 | 99.00th=[42206], 99.50th=[44303], 99.90th=[45351], 99.95th=[45351], 00:37:12.466 | 99.99th=[45351] 00:37:12.466 bw ( KiB/s): min=23344, max=25400, per=37.83%, avg=24372.00, stdev=1453.81, samples=2 00:37:12.466 iops : min= 5836, max= 6350, avg=6093.00, stdev=363.45, samples=2 00:37:12.466 lat (usec) : 1000=0.01% 00:37:12.466 lat (msec) : 10=56.92%, 20=38.05%, 50=5.02% 00:37:12.466 cpu : usr=3.49%, sys=5.58%, ctx=498, majf=0, minf=1 00:37:12.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:12.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.466 issued rwts: total=5709,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.466 job2: (groupid=0, jobs=1): err= 0: pid=1825080: Wed Nov 20 14:56:24 2024 00:37:12.466 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:37:12.466 slat (nsec): min=1240, max=22108k, avg=146430.59, stdev=1188157.27 00:37:12.466 clat (usec): min=4513, max=77436, avg=19561.67, stdev=14359.02 00:37:12.466 lat (usec): min=4521, max=81809, avg=19708.10, stdev=14481.76 00:37:12.466 clat percentiles (usec): 00:37:12.466 | 1.00th=[ 4555], 5.00th=[ 
9110], 10.00th=[10159], 20.00th=[10683], 00:37:12.466 | 30.00th=[12256], 40.00th=[13304], 50.00th=[14615], 60.00th=[17171], 00:37:12.466 | 70.00th=[19006], 80.00th=[20317], 90.00th=[41681], 95.00th=[57934], 00:37:12.466 | 99.00th=[73925], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:37:12.466 | 99.99th=[77071] 00:37:12.466 write: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(11.1MiB/1007msec); 0 zone resets 00:37:12.466 slat (nsec): min=1872, max=25413k, avg=193286.12, stdev=1238560.27 00:37:12.466 clat (usec): min=961, max=76207, avg=27417.84, stdev=16827.54 00:37:12.466 lat (usec): min=1241, max=76217, avg=27611.13, stdev=16892.12 00:37:12.466 clat percentiles (usec): 00:37:12.466 | 1.00th=[ 6783], 5.00th=[ 8356], 10.00th=[11338], 20.00th=[12387], 00:37:12.466 | 30.00th=[15139], 40.00th=[17695], 50.00th=[21103], 60.00th=[27132], 00:37:12.466 | 70.00th=[33162], 80.00th=[46400], 90.00th=[53216], 95.00th=[58983], 00:37:12.466 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:37:12.466 | 99.99th=[76022] 00:37:12.466 bw ( KiB/s): min=10088, max=11528, per=16.78%, avg=10808.00, stdev=1018.23, samples=2 00:37:12.466 iops : min= 2522, max= 2882, avg=2702.00, stdev=254.56, samples=2 00:37:12.466 lat (usec) : 1000=0.02% 00:37:12.466 lat (msec) : 2=0.04%, 10=7.51%, 20=55.34%, 50=26.40%, 100=10.69% 00:37:12.466 cpu : usr=1.89%, sys=3.58%, ctx=247, majf=0, minf=2 00:37:12.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:12.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:12.466 issued rwts: total=2560,2830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.466 job3: (groupid=0, jobs=1): err= 0: pid=1825085: Wed Nov 20 14:56:24 2024 00:37:12.466 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:37:12.466 slat (nsec): min=1171, 
max=19497k, avg=119663.11, stdev=834219.61 00:37:12.466 clat (usec): min=1119, max=49414, avg=14951.79, stdev=6871.42 00:37:12.466 lat (usec): min=1130, max=49421, avg=15071.45, stdev=6926.31 00:37:12.466 clat percentiles (usec): 00:37:12.466 | 1.00th=[ 1696], 5.00th=[ 3326], 10.00th=[ 8225], 20.00th=[11994], 00:37:12.466 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14746], 00:37:12.466 | 70.00th=[15926], 80.00th=[17171], 90.00th=[21103], 95.00th=[31065], 00:37:12.466 | 99.00th=[40633], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:37:12.466 | 99.99th=[49546] 00:37:12.466 write: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1004msec); 0 zone resets 00:37:12.466 slat (usec): min=2, max=10476, avg=139.40, stdev=722.69 00:37:12.466 clat (usec): min=414, max=103151, avg=19483.74, stdev=17142.14 00:37:12.466 lat (usec): min=436, max=103163, avg=19623.14, stdev=17247.19 00:37:12.466 clat percentiles (usec): 00:37:12.466 | 1.00th=[ 938], 5.00th=[ 4686], 10.00th=[ 6456], 20.00th=[ 10159], 00:37:12.466 | 30.00th=[ 11731], 40.00th=[ 11863], 50.00th=[ 12649], 60.00th=[ 13829], 00:37:12.466 | 70.00th=[ 17957], 80.00th=[ 30540], 90.00th=[ 36963], 95.00th=[ 51643], 00:37:12.466 | 99.00th=[ 95945], 99.50th=[ 99091], 99.90th=[103285], 99.95th=[103285], 00:37:12.466 | 99.99th=[103285] 00:37:12.466 bw ( KiB/s): min=12816, max=16384, per=22.66%, avg=14600.00, stdev=2522.96, samples=2 00:37:12.466 iops : min= 3204, max= 4096, avg=3650.00, stdev=630.74, samples=2 00:37:12.466 lat (usec) : 500=0.04%, 750=0.22%, 1000=0.33% 00:37:12.466 lat (msec) : 2=1.41%, 4=3.02%, 10=12.71%, 20=60.79%, 50=18.62% 00:37:12.466 lat (msec) : 100=2.68%, 250=0.19% 00:37:12.466 cpu : usr=2.59%, sys=4.69%, ctx=400, majf=0, minf=1 00:37:12.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:12.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:12.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:37:12.466 issued rwts: total=3584,3778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:12.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:12.466 00:37:12.466 Run status group 0 (all jobs): 00:37:12.466 READ: bw=57.9MiB/s (60.7MB/s), 9.93MiB/s-22.2MiB/s (10.4MB/s-23.3MB/s), io=60.5MiB (63.5MB), run=1004-1046msec 00:37:12.466 WRITE: bw=62.9MiB/s (66.0MB/s), 11.0MiB/s-23.9MiB/s (11.5MB/s-25.0MB/s), io=65.8MiB (69.0MB), run=1004-1046msec 00:37:12.466 00:37:12.466 Disk stats (read/write): 00:37:12.466 nvme0n1: ios=3112/3328, merge=0/0, ticks=46518/58485, in_queue=105003, util=98.00% 00:37:12.466 nvme0n2: ios=5159/5468, merge=0/0, ticks=22879/20683, in_queue=43562, util=99.49% 00:37:12.466 nvme0n3: ios=1765/2048, merge=0/0, ticks=17669/19053, in_queue=36722, util=98.13% 00:37:12.466 nvme0n4: ios=2925/3072, merge=0/0, ticks=37454/59700, in_queue=97154, util=96.12% 00:37:12.466 14:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:12.466 14:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1825283 00:37:12.466 14:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:12.466 14:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:12.466 [global] 00:37:12.466 thread=1 00:37:12.466 invalidate=1 00:37:12.466 rw=read 00:37:12.466 time_based=1 00:37:12.466 runtime=10 00:37:12.466 ioengine=libaio 00:37:12.466 direct=1 00:37:12.466 bs=4096 00:37:12.466 iodepth=1 00:37:12.466 norandommap=1 00:37:12.466 numjobs=1 00:37:12.466 00:37:12.466 [job0] 00:37:12.466 filename=/dev/nvme0n1 00:37:12.466 [job1] 00:37:12.466 filename=/dev/nvme0n2 00:37:12.466 [job2] 00:37:12.466 filename=/dev/nvme0n3 00:37:12.466 [job3] 00:37:12.466 filename=/dev/nvme0n4 00:37:12.466 Could not set queue depth 
(nvme0n1) 00:37:12.466 Could not set queue depth (nvme0n2) 00:37:12.466 Could not set queue depth (nvme0n3) 00:37:12.466 Could not set queue depth (nvme0n4) 00:37:12.466 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.466 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.466 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.466 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:12.466 fio-3.35 00:37:12.466 Starting 4 threads 00:37:15.745 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:15.745 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42713088, buflen=4096 00:37:15.745 fio: pid=1825523, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:15.745 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:15.745 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51486720, buflen=4096 00:37:15.745 fio: pid=1825518, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:15.745 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:15.745 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:16.002 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.002 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:16.002 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1523712, buflen=4096 00:37:16.002 fio: pid=1825490, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:16.002 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.002 14:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:16.002 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1040384, buflen=4096 00:37:16.002 fio: pid=1825502, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:16.260 00:37:16.260 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825490: Wed Nov 20 14:56:27 2024 00:37:16.260 read: IOPS=117, BW=469KiB/s (480kB/s)(1488KiB/3175msec) 00:37:16.260 slat (usec): min=6, max=14813, avg=49.58, stdev=766.51 00:37:16.260 clat (usec): min=256, max=41995, avg=8424.10, stdev=16281.31 00:37:16.260 lat (usec): min=263, max=55953, avg=8473.75, stdev=16381.87 00:37:16.260 clat percentiles (usec): 00:37:16.260 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 293], 00:37:16.260 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 347], 00:37:16.260 | 70.00th=[ 355], 80.00th=[ 553], 90.00th=[41157], 95.00th=[41157], 00:37:16.260 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:16.260 | 99.99th=[42206] 00:37:16.260 bw ( KiB/s): min= 93, max= 1736, per=1.75%, avg=490.17, stdev=674.67, samples=6 00:37:16.260 iops : min= 23, max= 
434, avg=122.50, stdev=168.70, samples=6 00:37:16.260 lat (usec) : 500=79.36%, 750=0.54% 00:37:16.260 lat (msec) : 50=19.84% 00:37:16.260 cpu : usr=0.03%, sys=0.13%, ctx=374, majf=0, minf=2 00:37:16.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 issued rwts: total=373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.260 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825502: Wed Nov 20 14:56:27 2024 00:37:16.260 read: IOPS=75, BW=301KiB/s (308kB/s)(1016KiB/3379msec) 00:37:16.260 slat (usec): min=7, max=15548, avg=126.62, stdev=1300.89 00:37:16.260 clat (usec): min=199, max=41639, avg=13085.33, stdev=18941.81 00:37:16.260 lat (usec): min=208, max=56722, avg=13157.87, stdev=19057.63 00:37:16.260 clat percentiles (usec): 00:37:16.260 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 235], 20.00th=[ 255], 00:37:16.260 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 297], 00:37:16.260 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:16.260 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:37:16.260 | 99.99th=[41681] 00:37:16.260 bw ( KiB/s): min= 136, max= 368, per=1.05%, avg=293.17, stdev=87.87, samples=6 00:37:16.260 iops : min= 34, max= 92, avg=73.17, stdev=21.86, samples=6 00:37:16.260 lat (usec) : 250=16.47%, 500=51.76% 00:37:16.260 lat (msec) : 50=31.37% 00:37:16.260 cpu : usr=0.06%, sys=0.12%, ctx=262, majf=0, minf=2 00:37:16.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:16.260 issued rwts: total=255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.260 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825518: Wed Nov 20 14:56:27 2024 00:37:16.260 read: IOPS=4278, BW=16.7MiB/s (17.5MB/s)(49.1MiB/2938msec) 00:37:16.260 slat (nsec): min=6559, max=57690, avg=7610.63, stdev=933.73 00:37:16.260 clat (usec): min=194, max=600, avg=222.81, stdev=15.81 00:37:16.260 lat (usec): min=202, max=607, avg=230.42, stdev=15.84 00:37:16.260 clat percentiles (usec): 00:37:16.260 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 00:37:16.260 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:37:16.260 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 251], 00:37:16.260 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 334], 99.95th=[ 445], 00:37:16.260 | 99.99th=[ 515] 00:37:16.260 bw ( KiB/s): min=17256, max=17632, per=62.52%, avg=17483.20, stdev=152.80, samples=5 00:37:16.260 iops : min= 4314, max= 4408, avg=4370.80, stdev=38.20, samples=5 00:37:16.260 lat (usec) : 250=93.80%, 500=6.16%, 750=0.02% 00:37:16.260 cpu : usr=1.23%, sys=3.95%, ctx=12572, majf=0, minf=1 00:37:16.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 issued rwts: total=12571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.260 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1825523: Wed Nov 20 14:56:27 2024 00:37:16.260 read: IOPS=3844, BW=15.0MiB/s (15.7MB/s)(40.7MiB/2713msec) 00:37:16.260 slat (nsec): min=6382, max=31741, avg=7268.77, stdev=881.65 00:37:16.260 clat (usec): min=215, 
max=656, avg=249.83, stdev= 7.57 00:37:16.260 lat (usec): min=222, max=663, avg=257.10, stdev= 7.55 00:37:16.260 clat percentiles (usec): 00:37:16.260 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 245], 00:37:16.260 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 249], 60.00th=[ 251], 00:37:16.260 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 258], 95.00th=[ 260], 00:37:16.260 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 277], 99.95th=[ 293], 00:37:16.260 | 99.99th=[ 416] 00:37:16.260 bw ( KiB/s): min=15496, max=15528, per=55.48%, avg=15516.80, stdev=12.13, samples=5 00:37:16.260 iops : min= 3874, max= 3882, avg=3879.20, stdev= 3.03, samples=5 00:37:16.260 lat (usec) : 250=51.48%, 500=48.50%, 750=0.01% 00:37:16.260 cpu : usr=0.92%, sys=3.50%, ctx=10429, majf=0, minf=2 00:37:16.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.260 issued rwts: total=10429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:16.260 00:37:16.260 Run status group 0 (all jobs): 00:37:16.260 READ: bw=27.3MiB/s (28.6MB/s), 301KiB/s-16.7MiB/s (308kB/s-17.5MB/s), io=92.3MiB (96.8MB), run=2713-3379msec 00:37:16.260 00:37:16.260 Disk stats (read/write): 00:37:16.260 nvme0n1: ios=370/0, merge=0/0, ticks=3051/0, in_queue=3051, util=95.31% 00:37:16.260 nvme0n2: ios=280/0, merge=0/0, ticks=3786/0, in_queue=3786, util=99.31% 00:37:16.260 nvme0n3: ios=12342/0, merge=0/0, ticks=2681/0, in_queue=2681, util=96.52% 00:37:16.260 nvme0n4: ios=10094/0, merge=0/0, ticks=2481/0, in_queue=2481, util=96.48% 00:37:16.260 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.260 14:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:16.518 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.518 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:16.776 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.776 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:16.776 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:16.776 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:17.033 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:17.033 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1825283 00:37:17.033 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:17.033 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:17.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:17.290 nvmf hotplug test: fio failed as expected 00:37:17.290 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:17.547 14:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.547 rmmod nvme_tcp 00:37:17.547 rmmod nvme_fabrics 00:37:17.547 rmmod nvme_keyring 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1822839 ']' 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1822839 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1822839 ']' 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1822839 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1822839 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:17.547 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:17.548 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1822839' 00:37:17.548 killing process with pid 1822839 00:37:17.548 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1822839 00:37:17.548 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1822839 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:17.806 14:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.806 14:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.338 00:37:20.338 real 0m25.744s 00:37:20.338 user 1m29.213s 00:37:20.338 sys 0m11.215s 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:20.338 ************************************ 00:37:20.338 END TEST nvmf_fio_target 00:37:20.338 ************************************ 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:20.338 ************************************ 00:37:20.338 START TEST nvmf_bdevio 00:37:20.338 ************************************ 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:20.338 * Looking for test storage... 00:37:20.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.338 14:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.338 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.339 --rc genhtml_branch_coverage=1 00:37:20.339 --rc genhtml_function_coverage=1 00:37:20.339 --rc genhtml_legend=1 00:37:20.339 --rc geninfo_all_blocks=1 00:37:20.339 --rc geninfo_unexecuted_blocks=1 00:37:20.339 00:37:20.339 ' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.339 --rc genhtml_branch_coverage=1 00:37:20.339 --rc genhtml_function_coverage=1 00:37:20.339 --rc genhtml_legend=1 00:37:20.339 --rc geninfo_all_blocks=1 00:37:20.339 --rc geninfo_unexecuted_blocks=1 00:37:20.339 00:37:20.339 ' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.339 --rc genhtml_branch_coverage=1 00:37:20.339 --rc genhtml_function_coverage=1 00:37:20.339 --rc genhtml_legend=1 00:37:20.339 --rc geninfo_all_blocks=1 00:37:20.339 --rc geninfo_unexecuted_blocks=1 00:37:20.339 00:37:20.339 ' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:20.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.339 --rc genhtml_branch_coverage=1 00:37:20.339 --rc genhtml_function_coverage=1 00:37:20.339 --rc genhtml_legend=1 00:37:20.339 --rc 
geninfo_all_blocks=1 00:37:20.339 --rc geninfo_unexecuted_blocks=1 00:37:20.339 00:37:20.339 ' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:20.339 14:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.339 14:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:20.339 14:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:25.612 14:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:25.612 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:25.612 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:25.612 14:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:25.612 Found net devices under 0000:86:00.0: cvl_0_0 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:25.612 Found net devices under 0000:86:00.1: cvl_0_1 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:25.612 14:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:25.612 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:25.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:25.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:37:25.872 00:37:25.872 --- 10.0.0.2 ping statistics --- 00:37:25.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:25.872 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:25.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:25.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:37:25.872 00:37:25.872 --- 10.0.0.1 ping statistics --- 00:37:25.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:25.872 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1829791 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1829791 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1829791 ']' 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.872 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.131 [2024-11-20 14:56:37.859573] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:26.131 [2024-11-20 14:56:37.860539] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:37:26.131 [2024-11-20 14:56:37.860576] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.131 [2024-11-20 14:56:37.939154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:26.131 [2024-11-20 14:56:37.981476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:26.131 [2024-11-20 14:56:37.981516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:26.131 [2024-11-20 14:56:37.981523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:26.131 [2024-11-20 14:56:37.981530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:26.131 [2024-11-20 14:56:37.981534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:26.131 [2024-11-20 14:56:37.983010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:26.131 [2024-11-20 14:56:37.983101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:26.131 [2024-11-20 14:56:37.983228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:26.131 [2024-11-20 14:56:37.983228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:26.131 [2024-11-20 14:56:38.051802] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:26.131 [2024-11-20 14:56:38.052377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:26.131 [2024-11-20 14:56:38.053072] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:26.131 [2024-11-20 14:56:38.053173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:26.131 [2024-11-20 14:56:38.053310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:26.131 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.131 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:26.131 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:26.131 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:26.131 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.389 [2024-11-20 14:56:38.119895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.389 Malloc0 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.389 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:26.390 [2024-11-20 14:56:38.200051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:26.390 { 00:37:26.390 "params": { 00:37:26.390 "name": "Nvme$subsystem", 00:37:26.390 "trtype": "$TEST_TRANSPORT", 00:37:26.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.390 "adrfam": "ipv4", 00:37:26.390 "trsvcid": "$NVMF_PORT", 00:37:26.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.390 "hdgst": ${hdgst:-false}, 00:37:26.390 "ddgst": ${ddgst:-false} 00:37:26.390 }, 00:37:26.390 "method": "bdev_nvme_attach_controller" 00:37:26.390 } 00:37:26.390 EOF 00:37:26.390 )") 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:26.390 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:26.390 "params": { 00:37:26.390 "name": "Nvme1", 00:37:26.390 "trtype": "tcp", 00:37:26.390 "traddr": "10.0.0.2", 00:37:26.390 "adrfam": "ipv4", 00:37:26.390 "trsvcid": "4420", 00:37:26.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:26.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:26.390 "hdgst": false, 00:37:26.390 "ddgst": false 00:37:26.390 }, 00:37:26.390 "method": "bdev_nvme_attach_controller" 00:37:26.390 }' 00:37:26.390 [2024-11-20 14:56:38.253958] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:37:26.390 [2024-11-20 14:56:38.254027] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829859 ] 00:37:26.390 [2024-11-20 14:56:38.333033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:26.647 [2024-11-20 14:56:38.377634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.647 [2024-11-20 14:56:38.377740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.647 [2024-11-20 14:56:38.377741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:26.904 I/O targets: 00:37:26.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:26.904 00:37:26.904 00:37:26.904 CUnit - A unit testing framework for C - Version 2.1-3 00:37:26.904 http://cunit.sourceforge.net/ 00:37:26.904 00:37:26.904 00:37:26.904 Suite: bdevio tests on: Nvme1n1 00:37:26.904 Test: blockdev write read block ...passed 00:37:26.904 Test: blockdev write zeroes read block ...passed 00:37:26.904 Test: blockdev write zeroes read no split ...passed 00:37:26.904 Test: blockdev 
write zeroes read split ...passed 00:37:27.161 Test: blockdev write zeroes read split partial ...passed 00:37:27.161 Test: blockdev reset ...[2024-11-20 14:56:38.877512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:27.161 [2024-11-20 14:56:38.877576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b3340 (9): Bad file descriptor 00:37:27.161 [2024-11-20 14:56:38.930125] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:37:27.161 passed 00:37:27.161 Test: blockdev write read 8 blocks ...passed 00:37:27.161 Test: blockdev write read size > 128k ...passed 00:37:27.161 Test: blockdev write read invalid size ...passed 00:37:27.161 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:27.161 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:27.161 Test: blockdev write read max offset ...passed 00:37:27.161 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:27.418 Test: blockdev writev readv 8 blocks ...passed 00:37:27.418 Test: blockdev writev readv 30 x 1block ...passed 00:37:27.418 Test: blockdev writev readv block ...passed 00:37:27.418 Test: blockdev writev readv size > 128k ...passed 00:37:27.418 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:27.418 Test: blockdev comparev and writev ...[2024-11-20 14:56:39.180905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.180938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.180958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 
[2024-11-20 14:56:39.180967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.181269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.181279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.181290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.181298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.181576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.181587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.181599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.181606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.181909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.181921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.181934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:27.418 [2024-11-20 14:56:39.181941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:27.418 passed 00:37:27.418 Test: blockdev nvme passthru rw ...passed 00:37:27.418 Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:56:39.264204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:27.418 [2024-11-20 14:56:39.264222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:27.418 [2024-11-20 14:56:39.264339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:27.418 [2024-11-20 14:56:39.264349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:27.419 [2024-11-20 14:56:39.264460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:27.419 [2024-11-20 14:56:39.264469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:27.419 [2024-11-20 14:56:39.264590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:27.419 [2024-11-20 14:56:39.264599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:27.419 passed 00:37:27.419 Test: blockdev nvme admin passthru ...passed 00:37:27.419 Test: blockdev copy ...passed 00:37:27.419 00:37:27.419 Run Summary: Type Total Ran Passed Failed Inactive 00:37:27.419 suites 1 1 n/a 0 0 00:37:27.419 tests 23 23 23 0 0 00:37:27.419 asserts 152 152 152 0 n/a 00:37:27.419 00:37:27.419 Elapsed time = 1.185 
seconds 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.676 rmmod nvme_tcp 00:37:27.676 rmmod nvme_fabrics 00:37:27.676 rmmod nvme_keyring 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1829791 ']' 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1829791 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1829791 ']' 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1829791 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1829791 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:27.676 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:27.677 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1829791' 00:37:27.677 killing process with pid 1829791 00:37:27.677 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1829791 00:37:27.677 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1829791 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.936 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.471 14:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.471 00:37:30.471 real 0m10.116s 00:37:30.471 user 0m9.875s 00:37:30.471 sys 0m5.342s 00:37:30.471 14:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.471 14:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:30.471 ************************************ 00:37:30.471 END TEST nvmf_bdevio 00:37:30.471 ************************************ 00:37:30.471 14:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:30.471 00:37:30.471 real 4m33.303s 00:37:30.471 user 9m6.678s 00:37:30.471 sys 1m52.515s 00:37:30.471 14:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:37:30.471 14:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:30.471 ************************************ 00:37:30.471 END TEST nvmf_target_core_interrupt_mode 00:37:30.471 ************************************ 00:37:30.471 14:56:41 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:30.471 14:56:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:30.471 14:56:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.471 14:56:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:30.471 ************************************ 00:37:30.471 START TEST nvmf_interrupt 00:37:30.471 ************************************ 00:37:30.471 14:56:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:30.471 * Looking for test storage... 
00:37:30.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:30.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.471 --rc genhtml_branch_coverage=1 00:37:30.471 --rc genhtml_function_coverage=1 00:37:30.471 --rc genhtml_legend=1 00:37:30.471 --rc geninfo_all_blocks=1 00:37:30.471 --rc geninfo_unexecuted_blocks=1 00:37:30.471 00:37:30.471 ' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:30.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.471 --rc genhtml_branch_coverage=1 00:37:30.471 --rc 
genhtml_function_coverage=1 00:37:30.471 --rc genhtml_legend=1 00:37:30.471 --rc geninfo_all_blocks=1 00:37:30.471 --rc geninfo_unexecuted_blocks=1 00:37:30.471 00:37:30.471 ' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:30.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.471 --rc genhtml_branch_coverage=1 00:37:30.471 --rc genhtml_function_coverage=1 00:37:30.471 --rc genhtml_legend=1 00:37:30.471 --rc geninfo_all_blocks=1 00:37:30.471 --rc geninfo_unexecuted_blocks=1 00:37:30.471 00:37:30.471 ' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:30.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:30.471 --rc genhtml_branch_coverage=1 00:37:30.471 --rc genhtml_function_coverage=1 00:37:30.471 --rc genhtml_legend=1 00:37:30.471 --rc geninfo_all_blocks=1 00:37:30.471 --rc geninfo_unexecuted_blocks=1 00:37:30.471 00:37:30.471 ' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:30.471 
14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.471 
14:56:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.471 14:56:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:30.472 14:56:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:30.472 
14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:30.472 14:56:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:37.040 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.041 14:56:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:37.041 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:37.041 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.041 14:56:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:37.041 Found net devices under 0000:86:00.0: cvl_0_0 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:37.041 Found net devices under 0000:86:00.1: cvl_0_1 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.041 14:56:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:37.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:37.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:37:37.041 00:37:37.041 --- 10.0.0.2 ping statistics --- 00:37:37.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.041 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:37:37.041 14:56:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:37.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:37.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:37:37.041 00:37:37.041 --- 10.0.0.1 ping statistics --- 00:37:37.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.041 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:37.041 14:56:48 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1833539 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1833539 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1833539 ']' 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.041 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 [2024-11-20 14:56:48.100583] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:37.042 [2024-11-20 14:56:48.101515] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:37:37.042 [2024-11-20 14:56:48.101549] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.042 [2024-11-20 14:56:48.180797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:37.042 [2024-11-20 14:56:48.221886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.042 [2024-11-20 14:56:48.221924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.042 [2024-11-20 14:56:48.221931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.042 [2024-11-20 14:56:48.221937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.042 [2024-11-20 14:56:48.221945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:37.042 [2024-11-20 14:56:48.223158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.042 [2024-11-20 14:56:48.223160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.042 [2024-11-20 14:56:48.290591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:37.042 [2024-11-20 14:56:48.291144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:37.042 [2024-11-20 14:56:48.291386] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:37.042 5000+0 records in 00:37:37.042 5000+0 records out 00:37:37.042 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0171251 s, 598 MB/s 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 AIO0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.042 14:56:48 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 [2024-11-20 14:56:48.408013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.042 [2024-11-20 14:56:48.448275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1833539 0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1833539 0 idle 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833539 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833539 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.25 reactor_0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:37.042 
14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1833539 1 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1833539 1 idle 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833585 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833585 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1833639 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1833539 0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1833539 0 busy 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:37.042 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:37.043 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:37.043 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:37.043 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:37.301 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833539 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0' 00:37:37.301 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833539 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.44 reactor_0 00:37:37.301 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:37.301 14:56:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:37.301 14:56:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1833539 1 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1833539 1 busy 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833585 root 20 0 128.2g 47616 34560 R 93.8 0.0 0:00.27 reactor_1' 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833585 root 20 0 128.2g 47616 34560 R 93.8 0.0 0:00.27 reactor_1 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:37.301 14:56:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1833639 00:37:47.267 Initializing NVMe Controllers 00:37:47.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:47.267 Controller IO queue size 256, less than required. 00:37:47.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:47.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:47.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:47.267 Initialization complete. Launching workers. 
00:37:47.267 ======================================================== 00:37:47.267 Latency(us) 00:37:47.267 Device Information : IOPS MiB/s Average min max 00:37:47.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15944.86 62.28 16063.33 3075.50 24341.14 00:37:47.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16067.86 62.77 15936.34 8061.81 26302.63 00:37:47.267 ======================================================== 00:37:47.267 Total : 32012.71 125.05 15999.59 3075.50 26302.63 00:37:47.267 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1833539 0 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1833539 0 idle 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:47.267 14:56:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:37:47.267 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833539 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0' 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833539 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1833539 1 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1833539 1 idle 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:47.268 14:56:59 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:47.268 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833585 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833585 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:47.526 14:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:47.784 14:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:37:47.784 14:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:47.784 14:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:47.784 14:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:47.784 14:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:50.317 14:57:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:50.317 14:57:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:50.317 14:57:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1833539 0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1833539 0 idle 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833539 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.46 reactor_0' 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833539 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.46 reactor_0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1833539 1 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1833539 1 idle 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1833539 00:37:50.318 
14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1833539 -w 256 00:37:50.318 14:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1833585 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1' 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1833585 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:50.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:50.318 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:50.318 rmmod nvme_tcp 00:37:50.318 rmmod nvme_fabrics 00:37:50.318 rmmod nvme_keyring 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:50.577 14:57:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1833539 ']' 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1833539 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1833539 ']' 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1833539 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833539 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833539' 00:37:50.577 killing process with pid 1833539 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1833539 00:37:50.577 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1833539 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:50.836 14:57:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.741 14:57:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:52.741 00:37:52.741 real 0m22.712s 00:37:52.741 user 0m39.642s 00:37:52.741 sys 0m8.347s 00:37:52.741 14:57:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.741 14:57:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:52.741 ************************************ 00:37:52.741 END TEST nvmf_interrupt 00:37:52.741 ************************************ 00:37:52.741 00:37:52.741 real 27m30.647s 00:37:52.741 user 57m21.765s 00:37:52.741 sys 9m37.276s 00:37:52.741 14:57:04 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.741 14:57:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:52.741 ************************************ 00:37:52.741 END TEST nvmf_tcp 00:37:52.741 ************************************ 00:37:52.741 14:57:04 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:53.000 14:57:04 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:53.000 14:57:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:53.000 14:57:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.000 14:57:04 -- common/autotest_common.sh@10 -- # set +x 00:37:53.000 ************************************ 
00:37:53.000 START TEST spdkcli_nvmf_tcp 00:37:53.000 ************************************ 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:53.000 * Looking for test storage... 00:37:53.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:53.000 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.001 --rc genhtml_branch_coverage=1 00:37:53.001 --rc genhtml_function_coverage=1 00:37:53.001 --rc genhtml_legend=1 00:37:53.001 --rc geninfo_all_blocks=1 00:37:53.001 --rc geninfo_unexecuted_blocks=1 00:37:53.001 00:37:53.001 ' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.001 --rc genhtml_branch_coverage=1 00:37:53.001 --rc genhtml_function_coverage=1 00:37:53.001 --rc genhtml_legend=1 00:37:53.001 --rc geninfo_all_blocks=1 
00:37:53.001 --rc geninfo_unexecuted_blocks=1 00:37:53.001 00:37:53.001 ' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.001 --rc genhtml_branch_coverage=1 00:37:53.001 --rc genhtml_function_coverage=1 00:37:53.001 --rc genhtml_legend=1 00:37:53.001 --rc geninfo_all_blocks=1 00:37:53.001 --rc geninfo_unexecuted_blocks=1 00:37:53.001 00:37:53.001 ' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:53.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.001 --rc genhtml_branch_coverage=1 00:37:53.001 --rc genhtml_function_coverage=1 00:37:53.001 --rc genhtml_legend=1 00:37:53.001 --rc geninfo_all_blocks=1 00:37:53.001 --rc geninfo_unexecuted_blocks=1 00:37:53.001 00:37:53.001 ' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:53.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1836412 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1836412 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1836412 ']' 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:53.001 
14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:53.001 14:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:53.261 [2024-11-20 14:57:04.960365] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:37:53.261 [2024-11-20 14:57:04.960414] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836412 ] 00:37:53.261 [2024-11-20 14:57:05.033071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:53.261 [2024-11-20 14:57:05.076952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.261 [2024-11-20 14:57:05.076946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:53.261 14:57:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:53.261 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:53.261 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:53.261 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:53.261 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:53.261 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:53.261 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:53.261 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:53.261 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:53.261 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:53.261 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:53.261 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:53.261 ' 00:37:56.550 [2024-11-20 14:57:07.903926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:57.576 [2024-11-20 14:57:09.244490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:00.107 [2024-11-20 14:57:11.728228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:38:02.008 [2024-11-20 14:57:13.891043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:03.934 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:03.934 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:03.934 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:03.934 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:03.934 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:03.934 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:03.934 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:03.934 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:03.934 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:03.934 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:03.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:03.934 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:03.934 14:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:04.193 14:57:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:04.193 14:57:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:04.193 14:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:04.193 14:57:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:04.193 14:57:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:04.452 14:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:04.452 14:57:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:04.452 14:57:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:04.452 14:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:04.452 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:04.452 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:04.452 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:04.452 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:04.452 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:04.452 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:04.452 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:04.452 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:04.452 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:04.452 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:04.452 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:04.452 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:04.452 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:04.452 ' 00:38:09.720 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:09.720 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:09.720 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:09.720 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:09.720 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:09.720 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:09.720 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:09.720 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:09.720 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:09.720 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:09.720 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:09.720 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:09.720 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:09.720 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1836412 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1836412 ']' 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1836412 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836412 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836412' 00:38:09.979 killing process with pid 1836412 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1836412 00:38:09.979 14:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1836412 00:38:10.238 14:57:22 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1836412 ']' 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1836412 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1836412 ']' 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1836412 00:38:10.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1836412) - No such process 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1836412 is not found' 00:38:10.238 Process with pid 1836412 is not found 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:10.238 00:38:10.238 real 0m17.323s 00:38:10.238 user 0m38.151s 00:38:10.238 sys 0m0.801s 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:10.238 14:57:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:10.238 ************************************ 00:38:10.238 END TEST spdkcli_nvmf_tcp 00:38:10.238 ************************************ 00:38:10.238 14:57:22 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:10.238 14:57:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:10.238 14:57:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:38:10.238 14:57:22 -- common/autotest_common.sh@10 -- # set +x 00:38:10.238 ************************************ 00:38:10.238 START TEST nvmf_identify_passthru 00:38:10.238 ************************************ 00:38:10.238 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:10.238 * Looking for test storage... 00:38:10.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:10.238 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:10.238 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:38:10.238 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:10.498 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:10.498 14:57:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:10.498 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:10.498 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:10.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.498 --rc genhtml_branch_coverage=1 00:38:10.498 --rc genhtml_function_coverage=1 00:38:10.498 --rc genhtml_legend=1 00:38:10.498 --rc geninfo_all_blocks=1 00:38:10.498 --rc geninfo_unexecuted_blocks=1 00:38:10.498 
00:38:10.498 ' 00:38:10.498 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:10.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.498 --rc genhtml_branch_coverage=1 00:38:10.498 --rc genhtml_function_coverage=1 00:38:10.498 --rc genhtml_legend=1 00:38:10.498 --rc geninfo_all_blocks=1 00:38:10.498 --rc geninfo_unexecuted_blocks=1 00:38:10.498 00:38:10.499 ' 00:38:10.499 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:10.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.499 --rc genhtml_branch_coverage=1 00:38:10.499 --rc genhtml_function_coverage=1 00:38:10.499 --rc genhtml_legend=1 00:38:10.499 --rc geninfo_all_blocks=1 00:38:10.499 --rc geninfo_unexecuted_blocks=1 00:38:10.499 00:38:10.499 ' 00:38:10.499 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:10.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:10.499 --rc genhtml_branch_coverage=1 00:38:10.499 --rc genhtml_function_coverage=1 00:38:10.499 --rc genhtml_legend=1 00:38:10.499 --rc geninfo_all_blocks=1 00:38:10.499 --rc geninfo_unexecuted_blocks=1 00:38:10.499 00:38:10.499 ' 00:38:10.499 14:57:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:10.499 14:57:22 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:10.499 14:57:22 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:10.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:10.499 14:57:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:10.499 14:57:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:10.499 14:57:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:10.499 14:57:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.499 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:10.499 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:10.499 14:57:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:10.499 14:57:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:17.070 
14:57:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:17.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:17.070 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:17.070 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:17.071 Found net devices under 0000:86:00.0: cvl_0_0 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:17.071 14:57:27 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:17.071 Found net devices under 0000:86:00.1: cvl_0_1 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:17.071 
14:57:27 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:17.071 14:57:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:17.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:17.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:38:17.071 00:38:17.071 --- 10.0.0.2 ping statistics --- 00:38:17.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:17.071 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:17.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:17.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:38:17.071 00:38:17.071 --- 10.0.0.1 ping statistics --- 00:38:17.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:17.071 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:17.071 14:57:28 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:17.071 
14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:38:17.071 14:57:28 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:38:17.071 14:57:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:21.314 14:57:32 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:38:21.314 14:57:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:21.314 14:57:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:38:21.314 14:57:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:24.601 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:38:24.601 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:24.601 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:24.601 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.860 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.860 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1843978 00:38:24.860 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:24.860 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:24.860 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1843978 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1843978 ']' 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.860 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.860 [2024-11-20 14:57:36.633676] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:38:24.860 [2024-11-20 14:57:36.633719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.860 [2024-11-20 14:57:36.712567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:24.860 [2024-11-20 14:57:36.755902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.860 [2024-11-20 14:57:36.755939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.860 [2024-11-20 14:57:36.755951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.860 [2024-11-20 14:57:36.755958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.860 [2024-11-20 14:57:36.755981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:24.860 [2024-11-20 14:57:36.757440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.860 [2024-11-20 14:57:36.757554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:24.860 [2024-11-20 14:57:36.757660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.860 [2024-11-20 14:57:36.757662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:38:24.861 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.861 INFO: Log level set to 20 00:38:24.861 INFO: Requests: 00:38:24.861 { 00:38:24.861 "jsonrpc": "2.0", 00:38:24.861 "method": "nvmf_set_config", 00:38:24.861 "id": 1, 00:38:24.861 "params": { 00:38:24.861 "admin_cmd_passthru": { 00:38:24.861 "identify_ctrlr": true 00:38:24.861 } 00:38:24.861 } 00:38:24.861 } 00:38:24.861 00:38:24.861 INFO: response: 00:38:24.861 { 00:38:24.861 "jsonrpc": "2.0", 00:38:24.861 "id": 1, 00:38:24.861 "result": true 00:38:24.861 } 00:38:24.861 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.861 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.861 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:24.861 INFO: Setting log level to 20 00:38:24.861 INFO: Setting log level to 20 00:38:24.861 INFO: Log level set to 20 00:38:24.861 INFO: Log level set to 20 00:38:24.861 
INFO: Requests: 00:38:24.861 { 00:38:24.861 "jsonrpc": "2.0", 00:38:24.861 "method": "framework_start_init", 00:38:24.861 "id": 1 00:38:24.861 } 00:38:24.861 00:38:24.861 INFO: Requests: 00:38:24.861 { 00:38:24.861 "jsonrpc": "2.0", 00:38:24.861 "method": "framework_start_init", 00:38:24.861 "id": 1 00:38:24.861 } 00:38:24.861 00:38:25.119 [2024-11-20 14:57:36.874206] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:25.119 INFO: response: 00:38:25.119 { 00:38:25.119 "jsonrpc": "2.0", 00:38:25.119 "id": 1, 00:38:25.119 "result": true 00:38:25.119 } 00:38:25.119 00:38:25.119 INFO: response: 00:38:25.119 { 00:38:25.119 "jsonrpc": "2.0", 00:38:25.119 "id": 1, 00:38:25.119 "result": true 00:38:25.119 } 00:38:25.119 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.119 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:25.119 INFO: Setting log level to 40 00:38:25.119 INFO: Setting log level to 40 00:38:25.119 INFO: Setting log level to 40 00:38:25.119 [2024-11-20 14:57:36.887546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.119 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:25.119 14:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:38:25.119 14:57:36 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.119 14:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:28.402 Nvme0n1 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.402 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.402 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.402 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:28.402 [2024-11-20 14:57:39.802435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.402 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.402 14:57:39 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:28.402 [ 00:38:28.402 { 00:38:28.402 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:28.402 "subtype": "Discovery", 00:38:28.402 "listen_addresses": [], 00:38:28.402 "allow_any_host": true, 00:38:28.402 "hosts": [] 00:38:28.402 }, 00:38:28.402 { 00:38:28.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:28.402 "subtype": "NVMe", 00:38:28.402 "listen_addresses": [ 00:38:28.402 { 00:38:28.402 "trtype": "TCP", 00:38:28.402 "adrfam": "IPv4", 00:38:28.402 "traddr": "10.0.0.2", 00:38:28.402 "trsvcid": "4420" 00:38:28.402 } 00:38:28.402 ], 00:38:28.402 "allow_any_host": true, 00:38:28.402 "hosts": [], 00:38:28.402 "serial_number": "SPDK00000000000001", 00:38:28.402 "model_number": "SPDK bdev Controller", 00:38:28.402 "max_namespaces": 1, 00:38:28.402 "min_cntlid": 1, 00:38:28.402 "max_cntlid": 65519, 00:38:28.402 "namespaces": [ 00:38:28.402 { 00:38:28.402 "nsid": 1, 00:38:28.402 "bdev_name": "Nvme0n1", 00:38:28.402 "name": "Nvme0n1", 00:38:28.402 "nguid": "6B7946FA9FB040A7B6316FDFAB4448A2", 00:38:28.402 "uuid": "6b7946fa-9fb0-40a7-b631-6fdfab4448a2" 00:38:28.402 } 00:38:28.402 ] 00:38:28.402 } 00:38:28.402 ] 00:38:28.402 14:57:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.403 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:28.403 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:28.403 14:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:28.403 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:38:28.403 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:28.403 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:28.403 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:28.662 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:28.662 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:28.662 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:28.662 14:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:28.662 rmmod nvme_tcp 00:38:28.662 rmmod nvme_fabrics 00:38:28.662 rmmod nvme_keyring 00:38:28.662 14:57:40 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1843978 ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1843978 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1843978 ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1843978 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843978 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843978' 00:38:28.662 killing process with pid 1843978 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1843978 00:38:28.662 14:57:40 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1843978 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:30.563 14:57:42 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:30.563 14:57:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.563 14:57:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:30.563 14:57:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.465 14:57:44 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:32.465 00:38:32.465 real 0m22.023s 00:38:32.465 user 0m27.612s 00:38:32.465 sys 0m6.239s 00:38:32.465 14:57:44 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:32.465 14:57:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:32.465 ************************************ 00:38:32.465 END TEST nvmf_identify_passthru 00:38:32.465 ************************************ 00:38:32.465 14:57:44 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:32.465 14:57:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:32.465 14:57:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:32.465 14:57:44 -- common/autotest_common.sh@10 -- # set +x 00:38:32.465 ************************************ 00:38:32.465 START TEST nvmf_dif 00:38:32.465 ************************************ 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:32.465 * Looking for test storage... 
00:38:32.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.465 --rc genhtml_branch_coverage=1 00:38:32.465 --rc genhtml_function_coverage=1 00:38:32.465 --rc genhtml_legend=1 00:38:32.465 --rc geninfo_all_blocks=1 00:38:32.465 --rc geninfo_unexecuted_blocks=1 00:38:32.465 00:38:32.465 ' 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.465 --rc genhtml_branch_coverage=1 00:38:32.465 --rc genhtml_function_coverage=1 00:38:32.465 --rc genhtml_legend=1 00:38:32.465 --rc geninfo_all_blocks=1 00:38:32.465 --rc geninfo_unexecuted_blocks=1 00:38:32.465 00:38:32.465 ' 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:38:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.465 --rc genhtml_branch_coverage=1 00:38:32.465 --rc genhtml_function_coverage=1 00:38:32.465 --rc genhtml_legend=1 00:38:32.465 --rc geninfo_all_blocks=1 00:38:32.465 --rc geninfo_unexecuted_blocks=1 00:38:32.465 00:38:32.465 ' 00:38:32.465 14:57:44 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:32.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.465 --rc genhtml_branch_coverage=1 00:38:32.465 --rc genhtml_function_coverage=1 00:38:32.465 --rc genhtml_legend=1 00:38:32.465 --rc geninfo_all_blocks=1 00:38:32.465 --rc geninfo_unexecuted_blocks=1 00:38:32.465 00:38:32.465 ' 00:38:32.465 14:57:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:32.465 14:57:44 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.465 14:57:44 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.465 14:57:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.466 14:57:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.466 14:57:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.466 14:57:44 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.466 14:57:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:32.466 14:57:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:32.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:32.466 14:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:32.466 14:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:38:32.466 14:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:32.466 14:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:32.466 14:57:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:32.466 14:57:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:32.466 14:57:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:32.466 14:57:44 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:38:32.466 14:57:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:38:39.061 14:57:49 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:39.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:39.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:39.061 14:57:49 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:39.061 14:57:49 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:39.061 Found net devices under 0000:86:00.0: cvl_0_0 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:39.062 Found net devices under 0000:86:00.1: cvl_0_1 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:39.062 
14:57:49 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:39.062 14:57:49 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:39.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:39.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:38:39.062 00:38:39.062 --- 10.0.0.2 ping statistics --- 00:38:39.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.062 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:39.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:39.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:38:39.062 00:38:39.062 --- 10.0.0.1 ping statistics --- 00:38:39.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:39.062 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:39.062 14:57:50 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:40.968 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:40.968 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:38:40.968 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:38:40.969 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:38:40.969 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:38:40.969 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:38:40.969 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:41.227 14:57:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:41.227 14:57:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1849398 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:41.227 14:57:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1849398 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1849398 ']' 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:41.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.227 14:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:41.227 [2024-11-20 14:57:53.141064] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:38:41.227 [2024-11-20 14:57:53.141106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.486 [2024-11-20 14:57:53.219402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.486 [2024-11-20 14:57:53.260837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:41.486 [2024-11-20 14:57:53.260874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.486 [2024-11-20 14:57:53.260881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.486 [2024-11-20 14:57:53.260887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.486 [2024-11-20 14:57:53.260892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:41.486 [2024-11-20 14:57:53.261487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:38:41.486 14:57:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:41.486 14:57:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:41.486 14:57:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:41.486 14:57:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:41.486 [2024-11-20 14:57:53.397020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.486 14:57:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.486 14:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:41.486 ************************************ 00:38:41.486 START TEST fio_dif_1_default 00:38:41.486 ************************************ 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:41.486 bdev_null0 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:41.486 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:41.746 [2024-11-20 14:57:53.449329] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:41.746 { 00:38:41.746 "params": { 00:38:41.746 "name": "Nvme$subsystem", 00:38:41.746 "trtype": "$TEST_TRANSPORT", 00:38:41.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:41.746 "adrfam": "ipv4", 00:38:41.746 "trsvcid": "$NVMF_PORT", 00:38:41.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:41.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:41.746 "hdgst": ${hdgst:-false}, 00:38:41.746 "ddgst": ${ddgst:-false} 00:38:41.746 }, 00:38:41.746 "method": "bdev_nvme_attach_controller" 00:38:41.746 } 00:38:41.746 EOF 00:38:41.746 )") 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:41.746 "params": { 00:38:41.746 "name": "Nvme0", 00:38:41.746 "trtype": "tcp", 00:38:41.746 "traddr": "10.0.0.2", 00:38:41.746 "adrfam": "ipv4", 00:38:41.746 "trsvcid": "4420", 00:38:41.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:41.746 "hdgst": false, 00:38:41.746 "ddgst": false 00:38:41.746 }, 00:38:41.746 "method": "bdev_nvme_attach_controller" 00:38:41.746 }' 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:41.746 14:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:42.005 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:42.005 fio-3.35 
00:38:42.005 Starting 1 thread 00:38:54.211 00:38:54.211 filename0: (groupid=0, jobs=1): err= 0: pid=1849763: Wed Nov 20 14:58:04 2024 00:38:54.211 read: IOPS=96, BW=385KiB/s (395kB/s)(3856KiB/10008msec) 00:38:54.211 slat (nsec): min=5990, max=33910, avg=6555.88, stdev=1318.38 00:38:54.211 clat (usec): min=40798, max=45921, avg=41508.57, stdev=571.06 00:38:54.211 lat (usec): min=40804, max=45955, avg=41515.12, stdev=571.36 00:38:54.211 clat percentiles (usec): 00:38:54.211 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:38:54.211 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:38:54.211 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:54.211 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:38:54.211 | 99.99th=[45876] 00:38:54.211 bw ( KiB/s): min= 351, max= 416, per=99.41%, avg=383.95, stdev=10.55, samples=20 00:38:54.211 iops : min= 87, max= 104, avg=95.95, stdev= 2.76, samples=20 00:38:54.211 lat (msec) : 50=100.00% 00:38:54.211 cpu : usr=92.73%, sys=7.03%, ctx=14, majf=0, minf=0 00:38:54.211 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:54.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:54.211 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:54.211 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:54.211 00:38:54.211 Run status group 0 (all jobs): 00:38:54.211 READ: bw=385KiB/s (395kB/s), 385KiB/s-385KiB/s (395kB/s-395kB/s), io=3856KiB (3949kB), run=10008-10008msec 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.211 00:38:54.211 real 0m11.271s 00:38:54.211 user 0m15.811s 00:38:54.211 sys 0m0.997s 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.211 ************************************ 00:38:54.211 END TEST fio_dif_1_default 00:38:54.211 ************************************ 00:38:54.211 14:58:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:54.211 14:58:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:54.211 14:58:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:54.211 14:58:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.211 ************************************ 00:38:54.211 START TEST fio_dif_1_multi_subsystems 00:38:54.211 ************************************ 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.211 bdev_null0 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.211 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.212 [2024-11-20 14:58:04.755305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.212 bdev_null1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.212 14:58:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.212 { 00:38:54.212 "params": { 00:38:54.212 "name": "Nvme$subsystem", 00:38:54.212 "trtype": "$TEST_TRANSPORT", 00:38:54.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.212 "adrfam": "ipv4", 00:38:54.212 "trsvcid": "$NVMF_PORT", 00:38:54.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.212 "hdgst": ${hdgst:-false}, 00:38:54.212 "ddgst": ${ddgst:-false} 00:38:54.212 }, 00:38:54.212 "method": "bdev_nvme_attach_controller" 00:38:54.212 } 00:38:54.212 EOF 00:38:54.212 )") 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.212 14:58:04 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:54.212 { 00:38:54.212 "params": { 00:38:54.212 "name": "Nvme$subsystem", 00:38:54.212 "trtype": "$TEST_TRANSPORT", 00:38:54.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.212 "adrfam": "ipv4", 00:38:54.212 "trsvcid": "$NVMF_PORT", 00:38:54.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.212 "hdgst": ${hdgst:-false}, 00:38:54.212 "ddgst": ${ddgst:-false} 00:38:54.212 }, 00:38:54.212 "method": "bdev_nvme_attach_controller" 00:38:54.212 } 00:38:54.212 EOF 00:38:54.212 )") 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:54.212 "params": { 00:38:54.212 "name": "Nvme0", 00:38:54.212 "trtype": "tcp", 00:38:54.212 "traddr": "10.0.0.2", 00:38:54.212 "adrfam": "ipv4", 00:38:54.212 "trsvcid": "4420", 00:38:54.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:54.212 "hdgst": false, 00:38:54.212 "ddgst": false 00:38:54.212 }, 00:38:54.212 "method": "bdev_nvme_attach_controller" 00:38:54.212 },{ 00:38:54.212 "params": { 00:38:54.212 "name": "Nvme1", 00:38:54.212 "trtype": "tcp", 00:38:54.212 "traddr": "10.0.0.2", 00:38:54.212 "adrfam": "ipv4", 00:38:54.212 "trsvcid": "4420", 00:38:54.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:54.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:54.212 "hdgst": false, 00:38:54.212 "ddgst": false 00:38:54.212 }, 00:38:54.212 "method": "bdev_nvme_attach_controller" 00:38:54.212 }' 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:54.212 14:58:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.212 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:54.212 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:54.212 fio-3.35 00:38:54.212 Starting 2 threads 00:39:04.377 00:39:04.377 filename0: (groupid=0, jobs=1): err= 0: pid=1851695: Wed Nov 20 14:58:16 2024 00:39:04.377 read: IOPS=190, BW=763KiB/s (782kB/s)(7648KiB/10021msec) 00:39:04.377 slat (nsec): min=6128, max=24527, avg=7184.80, stdev=1834.59 00:39:04.377 clat (usec): min=397, max=42561, avg=20943.49, stdev=20504.73 00:39:04.377 lat (usec): min=403, max=42567, avg=20950.67, stdev=20504.16 00:39:04.377 clat percentiles (usec): 00:39:04.377 | 1.00th=[ 412], 5.00th=[ 420], 10.00th=[ 424], 20.00th=[ 433], 00:39:04.377 | 30.00th=[ 441], 40.00th=[ 478], 50.00th=[ 537], 60.00th=[41681], 00:39:04.377 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:39:04.377 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:39:04.377 | 99.99th=[42730] 00:39:04.377 bw ( KiB/s): min= 672, max= 832, per=49.42%, avg=763.20, stdev=29.87, samples=20 00:39:04.377 iops : min= 168, max= 208, avg=190.80, stdev= 7.47, samples=20 00:39:04.377 lat (usec) : 500=48.01%, 750=1.99% 00:39:04.377 lat (msec) : 50=50.00% 00:39:04.377 cpu : usr=96.64%, sys=3.12%, ctx=11, majf=0, minf=145 00:39:04.377 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:04.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.378 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.378 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:04.378 filename1: (groupid=0, jobs=1): err= 0: pid=1851696: Wed Nov 20 14:58:16 2024 00:39:04.378 read: IOPS=195, BW=782KiB/s (800kB/s)(7824KiB/10009msec) 00:39:04.378 slat (nsec): min=6092, max=27992, avg=7170.06, stdev=1763.90 00:39:04.378 clat (usec): min=398, max=42569, avg=20447.30, stdev=20468.99 00:39:04.378 lat (usec): min=405, max=42576, avg=20454.47, stdev=20468.47 00:39:04.378 clat percentiles (usec): 00:39:04.378 | 1.00th=[ 420], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 506], 00:39:04.378 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 635], 60.00th=[41157], 00:39:04.378 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:39:04.378 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:39:04.378 | 99.99th=[42730] 00:39:04.378 bw ( KiB/s): min= 704, max= 896, per=50.52%, avg=780.80, stdev=44.53, samples=20 00:39:04.378 iops : min= 176, max= 224, avg=195.20, stdev=11.13, samples=20 00:39:04.378 lat (usec) : 500=17.02%, 750=34.10%, 1000=0.20% 00:39:04.378 lat (msec) : 50=48.67% 00:39:04.378 cpu : usr=96.90%, sys=2.86%, ctx=14, majf=0, minf=117 00:39:04.378 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:04.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:04.378 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:04.378 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:04.378 00:39:04.378 Run status group 0 (all jobs): 00:39:04.378 READ: bw=1544KiB/s (1581kB/s), 763KiB/s-782KiB/s (782kB/s-800kB/s), io=15.1MiB (15.8MB), run=10009-10021msec 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.378 00:39:04.378 real 0m11.551s 00:39:04.378 user 0m26.322s 00:39:04.378 sys 0m0.894s 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 ************************************ 00:39:04.378 END TEST fio_dif_1_multi_subsystems 00:39:04.378 ************************************ 00:39:04.378 14:58:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:04.378 14:58:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:04.378 14:58:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 ************************************ 00:39:04.378 START TEST fio_dif_rand_params 00:39:04.378 ************************************ 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 bdev_null0 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.378 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:04.637 [2024-11-20 14:58:16.353970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:04.637 { 00:39:04.637 "params": { 00:39:04.637 "name": "Nvme$subsystem", 00:39:04.637 "trtype": 
"$TEST_TRANSPORT", 00:39:04.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:04.637 "adrfam": "ipv4", 00:39:04.637 "trsvcid": "$NVMF_PORT", 00:39:04.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:04.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:04.637 "hdgst": ${hdgst:-false}, 00:39:04.637 "ddgst": ${ddgst:-false} 00:39:04.637 }, 00:39:04.637 "method": "bdev_nvme_attach_controller" 00:39:04.637 } 00:39:04.637 EOF 00:39:04.637 )") 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:04.637 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- 
# grep libasan 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:04.638 "params": { 00:39:04.638 "name": "Nvme0", 00:39:04.638 "trtype": "tcp", 00:39:04.638 "traddr": "10.0.0.2", 00:39:04.638 "adrfam": "ipv4", 00:39:04.638 "trsvcid": "4420", 00:39:04.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:04.638 "hdgst": false, 00:39:04.638 "ddgst": false 00:39:04.638 }, 00:39:04.638 "method": "bdev_nvme_attach_controller" 00:39:04.638 }' 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:04.638 14:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.896 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:04.896 ... 00:39:04.896 fio-3.35 00:39:04.896 Starting 3 threads 00:39:11.461 00:39:11.461 filename0: (groupid=0, jobs=1): err= 0: pid=1853622: Wed Nov 20 14:58:22 2024 00:39:11.461 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(198MiB/5049msec) 00:39:11.461 slat (nsec): min=6298, max=27250, avg=10833.47, stdev=1919.84 00:39:11.461 clat (usec): min=2994, max=50984, avg=9521.60, stdev=5273.34 00:39:11.461 lat (usec): min=3001, max=50996, avg=9532.44, stdev=5273.36 00:39:11.461 clat percentiles (usec): 00:39:11.461 | 1.00th=[ 4113], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7832], 00:39:11.461 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:39:11.461 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:39:11.461 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[51119], 00:39:11.461 | 99.99th=[51119] 00:39:11.461 bw ( KiB/s): min=35584, max=45312, per=35.50%, avg=40473.60, stdev=3030.11, samples=10 00:39:11.461 iops : min= 278, max= 354, avg=316.20, stdev=23.67, samples=10 00:39:11.461 lat (msec) : 4=0.95%, 10=77.46%, 20=19.95%, 50=1.39%, 100=0.25% 00:39:11.461 cpu : usr=94.43%, sys=5.27%, ctx=12, majf=0, minf=9 00:39:11.461 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.461 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:11.461 filename0: (groupid=0, jobs=1): err= 0: pid=1853623: Wed Nov 20 14:58:22 2024 00:39:11.461 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(177MiB/5045msec) 00:39:11.461 slat (nsec): min=6364, 
max=33685, avg=10782.35, stdev=2045.73 00:39:11.461 clat (usec): min=3045, max=92099, avg=10644.11, stdev=7129.05 00:39:11.461 lat (usec): min=3052, max=92111, avg=10654.89, stdev=7129.26 00:39:11.461 clat percentiles (usec): 00:39:11.461 | 1.00th=[ 3982], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8356], 00:39:11.461 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:39:11.461 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11994], 95.00th=[12649], 00:39:11.461 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52691], 99.95th=[91751], 00:39:11.461 | 99.99th=[91751] 00:39:11.461 bw ( KiB/s): min=27648, max=41984, per=31.75%, avg=36198.40, stdev=4657.04, samples=10 00:39:11.461 iops : min= 216, max= 328, avg=282.80, stdev=36.38, samples=10 00:39:11.461 lat (msec) : 4=1.06%, 10=58.55%, 20=37.57%, 50=1.55%, 100=1.27% 00:39:11.461 cpu : usr=95.34%, sys=4.38%, ctx=11, majf=0, minf=11 00:39:11.461 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.461 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:11.461 filename0: (groupid=0, jobs=1): err= 0: pid=1853624: Wed Nov 20 14:58:22 2024 00:39:11.461 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5044msec) 00:39:11.461 slat (nsec): min=6405, max=31256, avg=10715.17, stdev=1920.47 00:39:11.461 clat (usec): min=3267, max=50704, avg=10067.22, stdev=4557.48 00:39:11.461 lat (usec): min=3274, max=50717, avg=10077.93, stdev=4557.56 00:39:11.461 clat percentiles (usec): 00:39:11.461 | 1.00th=[ 3785], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7963], 00:39:11.461 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:39:11.461 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12125], 95.00th=[12518], 00:39:11.461 | 
99.00th=[45876], 99.50th=[49021], 99.90th=[50070], 99.95th=[50594], 00:39:11.461 | 99.99th=[50594] 00:39:11.461 bw ( KiB/s): min=34304, max=42240, per=33.57%, avg=38272.00, stdev=2411.33, samples=10 00:39:11.461 iops : min= 268, max= 330, avg=299.00, stdev=18.84, samples=10 00:39:11.461 lat (msec) : 4=1.74%, 10=50.63%, 20=46.49%, 50=1.00%, 100=0.13% 00:39:11.461 cpu : usr=95.34%, sys=4.38%, ctx=8, majf=0, minf=10 00:39:11.461 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.461 issued rwts: total=1497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:11.461 00:39:11.461 Run status group 0 (all jobs): 00:39:11.461 READ: bw=111MiB/s (117MB/s), 35.1MiB/s-39.2MiB/s (36.8MB/s-41.1MB/s), io=562MiB (589MB), run=5044-5049msec 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.461 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 bdev_null0 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 [2024-11-20 14:58:22.634470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 bdev_null1 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 bdev_null2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:11.462 14:58:22 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.462 { 00:39:11.462 "params": { 00:39:11.462 "name": "Nvme$subsystem", 00:39:11.462 "trtype": "$TEST_TRANSPORT", 00:39:11.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.462 "adrfam": "ipv4", 00:39:11.462 "trsvcid": "$NVMF_PORT", 00:39:11.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.462 "hdgst": ${hdgst:-false}, 00:39:11.462 "ddgst": ${ddgst:-false} 00:39:11.462 }, 00:39:11.462 "method": "bdev_nvme_attach_controller" 00:39:11.462 } 00:39:11.462 EOF 00:39:11.462 )") 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:11.462 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.463 { 00:39:11.463 "params": { 00:39:11.463 "name": "Nvme$subsystem", 00:39:11.463 "trtype": "$TEST_TRANSPORT", 00:39:11.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.463 "adrfam": "ipv4", 00:39:11.463 "trsvcid": "$NVMF_PORT", 00:39:11.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.463 "hdgst": ${hdgst:-false}, 00:39:11.463 "ddgst": ${ddgst:-false} 00:39:11.463 }, 00:39:11.463 "method": "bdev_nvme_attach_controller" 00:39:11.463 } 00:39:11.463 EOF 00:39:11.463 )") 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:11.463 
14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.463 { 00:39:11.463 "params": { 00:39:11.463 "name": "Nvme$subsystem", 00:39:11.463 "trtype": "$TEST_TRANSPORT", 00:39:11.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.463 "adrfam": "ipv4", 00:39:11.463 "trsvcid": "$NVMF_PORT", 00:39:11.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.463 "hdgst": ${hdgst:-false}, 00:39:11.463 "ddgst": ${ddgst:-false} 00:39:11.463 }, 00:39:11.463 "method": "bdev_nvme_attach_controller" 00:39:11.463 } 00:39:11.463 EOF 00:39:11.463 )") 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.463 "params": { 00:39:11.463 "name": "Nvme0", 00:39:11.463 "trtype": "tcp", 00:39:11.463 "traddr": "10.0.0.2", 00:39:11.463 "adrfam": "ipv4", 00:39:11.463 "trsvcid": "4420", 00:39:11.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:11.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:11.463 "hdgst": false, 00:39:11.463 "ddgst": false 00:39:11.463 }, 00:39:11.463 "method": "bdev_nvme_attach_controller" 00:39:11.463 },{ 00:39:11.463 "params": { 00:39:11.463 "name": "Nvme1", 00:39:11.463 "trtype": "tcp", 00:39:11.463 "traddr": "10.0.0.2", 00:39:11.463 "adrfam": "ipv4", 00:39:11.463 "trsvcid": "4420", 00:39:11.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.463 "hdgst": false, 00:39:11.463 "ddgst": false 00:39:11.463 }, 00:39:11.463 "method": "bdev_nvme_attach_controller" 00:39:11.463 },{ 00:39:11.463 "params": { 00:39:11.463 "name": "Nvme2", 00:39:11.463 "trtype": "tcp", 00:39:11.463 "traddr": "10.0.0.2", 00:39:11.463 "adrfam": "ipv4", 00:39:11.463 "trsvcid": "4420", 00:39:11.463 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:11.463 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:11.463 "hdgst": false, 00:39:11.463 "ddgst": false 00:39:11.463 }, 00:39:11.463 "method": "bdev_nvme_attach_controller" 00:39:11.463 }' 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:11.463 14:58:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:11.463 14:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:11.463 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:11.463 ... 00:39:11.463 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:11.463 ... 00:39:11.463 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:11.463 ... 
00:39:11.463 fio-3.35 00:39:11.463 Starting 24 threads 00:39:23.688 00:39:23.688 filename0: (groupid=0, jobs=1): err= 0: pid=1854655: Wed Nov 20 14:58:34 2024 00:39:23.688 read: IOPS=575, BW=2301KiB/s (2356kB/s)(22.5MiB/10025msec) 00:39:23.688 slat (nsec): min=6936, max=81339, avg=13410.82, stdev=6548.07 00:39:23.688 clat (usec): min=1152, max=29673, avg=27706.06, stdev=4817.62 00:39:23.688 lat (usec): min=1163, max=29687, avg=27719.47, stdev=4817.89 00:39:23.688 clat percentiles (usec): 00:39:23.688 | 1.00th=[ 1434], 5.00th=[28181], 10.00th=[28443], 20.00th=[28443], 00:39:23.688 | 30.00th=[28705], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705], 00:39:23.688 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:39:23.688 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:39:23.688 | 99.99th=[29754] 00:39:23.688 bw ( KiB/s): min= 2176, max= 3760, per=4.35%, avg=2300.00, stdev=349.15, samples=20 00:39:23.688 iops : min= 544, max= 940, avg=575.00, stdev=87.29, samples=20 00:39:23.688 lat (msec) : 2=2.22%, 4=0.28%, 10=0.64%, 20=1.13%, 50=95.73% 00:39:23.688 cpu : usr=98.62%, sys=1.02%, ctx=11, majf=0, minf=9 00:39:23.688 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:23.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.688 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.688 issued rwts: total=5766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.688 filename0: (groupid=0, jobs=1): err= 0: pid=1854656: Wed Nov 20 14:58:34 2024 00:39:23.688 read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.6MiB/10044msec) 00:39:23.688 slat (nsec): min=7378, max=73813, avg=23044.30, stdev=10776.27 00:39:23.688 clat (usec): min=27653, max=96983, avg=28905.33, stdev=4159.32 00:39:23.688 lat (usec): min=27711, max=97006, avg=28928.38, stdev=4159.26 00:39:23.688 clat percentiles (usec): 
00:39:23.688 | 1.00th=[27919], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443], 00:39:23.688 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28705], 00:39:23.688 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:39:23.688 | 99.00th=[29230], 99.50th=[65274], 99.90th=[96994], 99.95th=[96994], 00:39:23.688 | 99.99th=[96994] 00:39:23.688 bw ( KiB/s): min= 2048, max= 2304, per=4.18%, avg=2209.68, stdev=71.93, samples=19 00:39:23.688 iops : min= 512, max= 576, avg=552.42, stdev=17.98, samples=19 00:39:23.688 lat (msec) : 50=99.42%, 100=0.58% 00:39:23.688 cpu : usr=98.67%, sys=0.97%, ctx=14, majf=0, minf=9 00:39:23.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:23.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.688 issued rwts: total=5520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.688 filename0: (groupid=0, jobs=1): err= 0: pid=1854657: Wed Nov 20 14:58:34 2024 00:39:23.688 read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.6MiB/10074msec) 00:39:23.688 slat (nsec): min=4640, max=91016, avg=32123.44, stdev=14836.42 00:39:23.688 clat (msec): min=27, max=116, avg=28.81, stdev= 4.85 00:39:23.688 lat (msec): min=27, max=116, avg=28.85, stdev= 4.85 00:39:23.688 clat percentiles (msec): 00:39:23.688 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:39:23.688 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:39:23.688 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:39:23.688 | 99.00th=[ 30], 99.50th=[ 49], 99.90th=[ 117], 99.95th=[ 117], 00:39:23.688 | 99.99th=[ 117] 00:39:23.688 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2216.42, stdev=74.55, samples=19 00:39:23.688 iops : min= 512, max= 576, avg=554.11, stdev=18.64, samples=19 00:39:23.688 lat (msec) : 50=99.71%, 
250=0.29% 00:39:23.688 cpu : usr=98.69%, sys=0.94%, ctx=14, majf=0, minf=9 00:39:23.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:23.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.688 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.688 filename0: (groupid=0, jobs=1): err= 0: pid=1854658: Wed Nov 20 14:58:34 2024 00:39:23.688 read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.6MiB/10073msec) 00:39:23.688 slat (nsec): min=7074, max=94580, avg=33640.39, stdev=16978.32 00:39:23.688 clat (msec): min=20, max=116, avg=28.80, stdev= 4.88 00:39:23.688 lat (msec): min=20, max=116, avg=28.83, stdev= 4.88 00:39:23.688 clat percentiles (msec): 00:39:23.688 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:39:23.688 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:39:23.688 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:39:23.688 | 99.00th=[ 36], 99.50th=[ 48], 99.90th=[ 116], 99.95th=[ 116], 00:39:23.688 | 99.99th=[ 117] 00:39:23.688 bw ( KiB/s): min= 2048, max= 2304, per=4.19%, avg=2216.42, stdev=74.55, samples=19 00:39:23.688 iops : min= 512, max= 576, avg=554.11, stdev=18.64, samples=19 00:39:23.688 lat (msec) : 50=99.71%, 250=0.29% 00:39:23.688 cpu : usr=98.35%, sys=1.27%, ctx=14, majf=0, minf=9 00:39:23.688 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:23.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.689 filename0: (groupid=0, jobs=1): err= 0: pid=1854659: Wed Nov 20 14:58:34 2024 
00:39:23.689 read: IOPS=551, BW=2206KiB/s (2259kB/s)(21.8MiB/10096msec) 00:39:23.689 slat (nsec): min=5059, max=81636, avg=24628.76, stdev=12623.16 00:39:23.689 clat (msec): min=17, max=111, avg=28.78, stdev= 4.45 00:39:23.689 lat (msec): min=17, max=111, avg=28.81, stdev= 4.45 00:39:23.689 clat percentiles (msec): 00:39:23.689 | 1.00th=[ 29], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:39:23.689 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:39:23.689 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:39:23.689 | 99.00th=[ 30], 99.50th=[ 30], 99.90th=[ 111], 99.95th=[ 111], 00:39:23.689 | 99.99th=[ 112] 00:39:23.689 bw ( KiB/s): min= 2116, max= 2304, per=4.19%, avg=2217.80, stdev=66.23, samples=20 00:39:23.689 iops : min= 529, max= 576, avg=554.45, stdev=16.56, samples=20 00:39:23.689 lat (msec) : 20=0.29%, 50=99.43%, 250=0.29% 00:39:23.689 cpu : usr=98.52%, sys=1.11%, ctx=13, majf=0, minf=9 00:39:23.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:23.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.689 filename0: (groupid=0, jobs=1): err= 0: pid=1854660: Wed Nov 20 14:58:34 2024 00:39:23.689 read: IOPS=550, BW=2202KiB/s (2254kB/s)(21.7MiB/10087msec) 00:39:23.689 slat (nsec): min=6352, max=65927, avg=22363.39, stdev=7774.43 00:39:23.689 clat (msec): min=28, max=111, avg=28.86, stdev= 4.44 00:39:23.689 lat (msec): min=28, max=111, avg=28.89, stdev= 4.44 00:39:23.689 clat percentiles (msec): 00:39:23.689 | 1.00th=[ 29], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:39:23.689 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:39:23.689 | 70.00th=[ 29], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:39:23.689 | 99.00th=[ 
30], 99.50th=[ 37], 99.90th=[ 111], 99.95th=[ 112], 00:39:23.689 | 99.99th=[ 112] 00:39:23.689 bw ( KiB/s): min= 2150, max= 2304, per=4.18%, avg=2213.10, stdev=61.32, samples=20 00:39:23.689 iops : min= 537, max= 576, avg=553.25, stdev=15.36, samples=20 00:39:23.689 lat (msec) : 50=99.71%, 250=0.29% 00:39:23.689 cpu : usr=98.39%, sys=1.19%, ctx=30, majf=0, minf=9 00:39:23.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:23.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.689 filename0: (groupid=0, jobs=1): err= 0: pid=1854661: Wed Nov 20 14:58:34 2024 00:39:23.689 read: IOPS=555, BW=2224KiB/s (2277kB/s)(21.9MiB/10084msec) 00:39:23.689 slat (nsec): min=7196, max=54036, avg=20149.27, stdev=5926.96 00:39:23.689 clat (usec): min=8271, max=96860, avg=28602.34, stdev=4070.89 00:39:23.689 lat (usec): min=8296, max=96877, avg=28622.49, stdev=4070.99 00:39:23.689 clat percentiles (usec): 00:39:23.689 | 1.00th=[18220], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443], 00:39:23.689 | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705], 00:39:23.689 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:39:23.689 | 99.00th=[29230], 99.50th=[29492], 99.90th=[96994], 99.95th=[96994], 00:39:23.689 | 99.99th=[96994] 00:39:23.689 bw ( KiB/s): min= 2176, max= 2480, per=4.22%, avg=2236.00, stdev=84.33, samples=20 00:39:23.689 iops : min= 544, max= 620, avg=559.00, stdev=21.08, samples=20 00:39:23.689 lat (msec) : 10=0.25%, 20=1.07%, 50=98.39%, 100=0.29% 00:39:23.689 cpu : usr=98.69%, sys=0.95%, ctx=11, majf=0, minf=9 00:39:23.689 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:23.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 issued rwts: total=5606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.689 filename0: (groupid=0, jobs=1): err= 0: pid=1854662: Wed Nov 20 14:58:34 2024 00:39:23.689 read: IOPS=555, BW=2221KiB/s (2275kB/s)(21.9MiB/10084msec) 00:39:23.689 slat (nsec): min=7072, max=57236, avg=12348.40, stdev=4513.05 00:39:23.689 clat (usec): min=8310, max=96459, avg=28706.11, stdev=4162.07 00:39:23.689 lat (usec): min=8343, max=96474, avg=28718.46, stdev=4161.62 00:39:23.689 clat percentiles (usec): 00:39:23.689 | 1.00th=[15139], 5.00th=[28443], 10.00th=[28443], 20.00th=[28705], 00:39:23.689 | 30.00th=[28705], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705], 00:39:23.689 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:39:23.689 | 99.00th=[29492], 99.50th=[38011], 99.90th=[95945], 99.95th=[95945], 00:39:23.689 | 99.99th=[95945] 00:39:23.689 bw ( KiB/s): min= 2176, max= 2432, per=4.22%, avg=2233.60, stdev=77.42, samples=20 00:39:23.689 iops : min= 544, max= 608, avg=558.40, stdev=19.35, samples=20 00:39:23.689 lat (msec) : 10=0.36%, 20=1.11%, 50=98.25%, 100=0.29% 00:39:23.689 cpu : usr=98.61%, sys=1.02%, ctx=13, majf=0, minf=9 00:39:23.689 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:23.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.689 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:23.689 filename1: (groupid=0, jobs=1): err= 0: pid=1854663: Wed Nov 20 14:58:34 2024 00:39:23.689 read: IOPS=551, BW=2207KiB/s (2260kB/s)(21.7MiB/10061msec) 00:39:23.689 slat (nsec): min=5303, max=42504, avg=20008.50, stdev=5955.61 
00:39:23.689     clat (usec): min=18387, max=96987, avg=28811.06, stdev=3675.57
00:39:23.689      lat (usec): min=18396, max=97025, avg=28831.07, stdev=3675.75
00:39:23.689     clat percentiles (usec):
00:39:23.689      | 1.00th=[28181], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443],
00:39:23.689      | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705],
00:39:23.689      | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:39:23.689      | 99.00th=[29230], 99.50th=[29492], 99.90th=[96994], 99.95th=[96994],
00:39:23.689      | 99.99th=[96994]
00:39:23.689    bw (  KiB/s): min= 2000, max= 2304, per=4.18%, avg=2212.00, stdev=79.39, samples=20
00:39:23.689    iops        : min=  500, max=  576, avg=553.00, stdev=19.85, samples=20
00:39:23.689   lat (msec)   : 20=0.04%, 50=99.68%, 100=0.29%
00:39:23.689   cpu          : usr=98.44%, sys=1.20%, ctx=5, majf=0, minf=9
00:39:23.689   IO depths    : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:39:23.689      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.689      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.689      issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.689      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.689 filename1: (groupid=0, jobs=1): err= 0: pid=1854664: Wed Nov 20 14:58:34 2024
00:39:23.689   read: IOPS=555, BW=2221KiB/s (2275kB/s)(21.8MiB/10066msec)
00:39:23.689     slat (nsec): min=7029, max=62092, avg=21195.42, stdev=6508.56
00:39:23.689     clat (usec): min=17289, max=96987, avg=28622.11, stdev=4046.67
00:39:23.689      lat (usec): min=17305, max=97020, avg=28643.31, stdev=4047.33
00:39:23.689     clat percentiles (usec):
00:39:23.689      | 1.00th=[18482], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443],
00:39:23.689      | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28705],
00:39:23.689      | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:39:23.689      | 99.00th=[29230], 99.50th=[44303], 99.90th=[96994], 99.95th=[96994],
00:39:23.689      | 99.99th=[96994]
00:39:23.689    bw (  KiB/s): min= 2108, max= 2352, per=4.21%, avg=2226.20, stdev=72.51, samples=20
00:39:23.689    iops        : min=  527, max=  588, avg=556.55, stdev=18.13, samples=20
00:39:23.689   lat (msec)   : 20=1.65%, 50=98.07%, 100=0.29%
00:39:23.689   cpu          : usr=98.72%, sys=0.92%, ctx=12, majf=0, minf=9
00:39:23.689   IO depths    : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0%
00:39:23.689      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.689      complete  : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.689      issued rwts: total=5590,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.689      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.689 filename1: (groupid=0, jobs=1): err= 0: pid=1854665: Wed Nov 20 14:58:34 2024
00:39:23.689   read: IOPS=549, BW=2197KiB/s (2250kB/s)(21.6MiB/10078msec)
00:39:23.689     slat (nsec): min=4431, max=93302, avg=32396.00, stdev=16391.06
00:39:23.689     clat (msec): min=27, max=119, avg=28.80, stdev= 4.87
00:39:23.689      lat (msec): min=27, max=119, avg=28.83, stdev= 4.87
00:39:23.689     clat percentiles (msec):
00:39:23.689      | 1.00th=[   29], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.689      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.689      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.689      | 99.00th=[   30], 99.50th=[   49], 99.90th=[  117], 99.95th=[  117],
00:39:23.689      | 99.99th=[  121]
00:39:23.689    bw (  KiB/s): min= 2048, max= 2304, per=4.19%, avg=2216.42, stdev=74.55, samples=19
00:39:23.689    iops        : min=  512, max=  576, avg=554.11, stdev=18.64, samples=19
00:39:23.689   lat (msec)   : 50=99.71%, 250=0.29%
00:39:23.689   cpu          : usr=98.63%, sys=1.00%, ctx=12, majf=0, minf=9
00:39:23.689   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.689      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.689      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.689      issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.689      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.689 filename1: (groupid=0, jobs=1): err= 0: pid=1854666: Wed Nov 20 14:58:34 2024
00:39:23.689   read: IOPS=550, BW=2201KiB/s (2254kB/s)(21.7MiB/10089msec)
00:39:23.689     slat (nsec): min=6385, max=81493, avg=23715.37, stdev=12946.06
00:39:23.689     clat (msec): min=28, max=112, avg=28.84, stdev= 4.45
00:39:23.689      lat (msec): min=28, max=112, avg=28.86, stdev= 4.45
00:39:23.689     clat percentiles (msec):
00:39:23.689      | 1.00th=[   29], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.689      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.690      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.690      | 99.00th=[   30], 99.50th=[   37], 99.90th=[  111], 99.95th=[  112],
00:39:23.690      | 99.99th=[  113]
00:39:23.690    bw (  KiB/s): min= 2150, max= 2304, per=4.18%, avg=2213.10, stdev=61.32, samples=20
00:39:23.690    iops        : min=  537, max=  576, avg=553.25, stdev=15.36, samples=20
00:39:23.690   lat (msec)   : 50=99.71%, 250=0.29%
00:39:23.690   cpu          : usr=98.65%, sys=0.99%, ctx=5, majf=0, minf=9
00:39:23.690   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename1: (groupid=0, jobs=1): err= 0: pid=1854667: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10005msec)
00:39:23.690     slat (nsec): min=7156, max=81641, avg=19260.89, stdev=8851.71
00:39:23.690     clat (usec): min=8224, max=39701, avg=28430.89, stdev=1953.20
00:39:23.690      lat (usec): min=8241, max=39724, avg=28450.15, stdev=1952.90
00:39:23.690     clat percentiles (usec):
00:39:23.690      | 1.00th=[14353], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443],
00:39:23.690      | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705],
00:39:23.690      | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967],
00:39:23.690      | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[39584],
00:39:23.690      | 99.99th=[39584]
00:39:23.690    bw (  KiB/s): min= 2176, max= 2560, per=4.23%, avg=2236.63, stdev=98.86, samples=19
00:39:23.690    iops        : min=  544, max=  640, avg=559.16, stdev=24.71, samples=19
00:39:23.690   lat (msec)   : 10=0.29%, 20=1.21%, 50=98.50%
00:39:23.690   cpu          : usr=98.49%, sys=1.15%, ctx=12, majf=0, minf=9
00:39:23.690   IO depths    : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename1: (groupid=0, jobs=1): err= 0: pid=1854668: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=550, BW=2201KiB/s (2254kB/s)(21.7MiB/10082msec)
00:39:23.690     slat (nsec): min=4215, max=96078, avg=32549.40, stdev=16545.32
00:39:23.690     clat (msec): min=21, max=116, avg=28.77, stdev= 4.84
00:39:23.690      lat (msec): min=21, max=116, avg=28.81, stdev= 4.84
00:39:23.690     clat percentiles (msec):
00:39:23.690      | 1.00th=[   25], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.690      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.690      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.690      | 99.00th=[   35], 99.50th=[   42], 99.90th=[  116], 99.95th=[  117],
00:39:23.690      | 99.99th=[  117]
00:39:23.690    bw (  KiB/s): min= 2031, max= 2304, per=4.18%, avg=2211.95, stdev=76.22, samples=20
00:39:23.690    iops        : min=  507, max=  576, avg=552.95, stdev=19.15, samples=20
00:39:23.690   lat (msec)   : 50=99.71%, 250=0.29%
00:39:23.690   cpu          : usr=98.70%, sys=0.92%, ctx=15, majf=0, minf=9
00:39:23.690   IO depths    : 1=5.4%, 2=11.5%, 4=24.6%, 8=51.4%, 16=7.1%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5548,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename1: (groupid=0, jobs=1): err= 0: pid=1854669: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.6MiB/10075msec)
00:39:23.690     slat (nsec): min=4372, max=96592, avg=32359.84, stdev=16212.12
00:39:23.690     clat (msec): min=22, max=116, avg=28.80, stdev= 4.85
00:39:23.690      lat (msec): min=22, max=116, avg=28.83, stdev= 4.85
00:39:23.690     clat percentiles (msec):
00:39:23.690      | 1.00th=[   29], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.690      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.690      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.690      | 99.00th=[   30], 99.50th=[   50], 99.90th=[  117], 99.95th=[  117],
00:39:23.690      | 99.99th=[  117]
00:39:23.690    bw (  KiB/s): min= 2048, max= 2304, per=4.19%, avg=2216.42, stdev=74.55, samples=19
00:39:23.690    iops        : min=  512, max=  576, avg=554.11, stdev=18.64, samples=19
00:39:23.690   lat (msec)   : 50=99.71%, 250=0.29%
00:39:23.690   cpu          : usr=98.54%, sys=1.09%, ctx=12, majf=0, minf=9
00:39:23.690   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename1: (groupid=0, jobs=1): err= 0: pid=1854670: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=549, BW=2197KiB/s (2250kB/s)(21.6MiB/10078msec)
00:39:23.690     slat (nsec): min=4484, max=94847, avg=32787.69, stdev=16452.71
00:39:23.690     clat (msec): min=27, max=118, avg=28.80, stdev= 4.88
00:39:23.690      lat (msec): min=27, max=118, avg=28.83, stdev= 4.88
00:39:23.690     clat percentiles (msec):
00:39:23.690      | 1.00th=[   28], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.690      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.690      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.690      | 99.00th=[   30], 99.50th=[   51], 99.90th=[  117], 99.95th=[  117],
00:39:23.690      | 99.99th=[  120]
00:39:23.690    bw (  KiB/s): min= 2048, max= 2304, per=4.19%, avg=2216.42, stdev=74.55, samples=19
00:39:23.690    iops        : min=  512, max=  576, avg=554.11, stdev=18.64, samples=19
00:39:23.690   lat (msec)   : 50=99.42%, 100=0.29%, 250=0.29%
00:39:23.690   cpu          : usr=98.65%, sys=0.98%, ctx=14, majf=0, minf=9
00:39:23.690   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename2: (groupid=0, jobs=1): err= 0: pid=1854671: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=549, BW=2197KiB/s (2249kB/s)(21.6MiB/10081msec)
00:39:23.690     slat (nsec): min=4545, max=94448, avg=33465.41, stdev=17095.99
00:39:23.690     clat (msec): min=20, max=118, avg=28.82, stdev= 5.00
00:39:23.690      lat (msec): min=20, max=118, avg=28.85, stdev= 5.00
00:39:23.690     clat percentiles (msec):
00:39:23.690      | 1.00th=[   28], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.690      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.690      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.690      | 99.00th=[   36], 99.50th=[   53], 99.90th=[  117], 99.95th=[  117],
00:39:23.690      | 99.99th=[  120]
00:39:23.690    bw (  KiB/s): min= 2048, max= 2304, per=4.17%, avg=2208.00, stdev=81.75, samples=20
00:39:23.690    iops        : min=  512, max=  576, avg=552.00, stdev=20.44, samples=20
00:39:23.690   lat (msec)   : 50=99.42%, 100=0.29%, 250=0.29%
00:39:23.690   cpu          : usr=98.68%, sys=0.94%, ctx=14, majf=0, minf=9
00:39:23.690   IO depths    : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename2: (groupid=0, jobs=1): err= 0: pid=1854672: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=555, BW=2221KiB/s (2275kB/s)(21.9MiB/10084msec)
00:39:23.690     slat (nsec): min=7111, max=53164, avg=19557.12, stdev=5501.73
00:39:23.690     clat (usec): min=8270, max=96741, avg=28643.44, stdev=4039.97
00:39:23.690      lat (usec): min=8296, max=96754, avg=28663.00, stdev=4039.89
00:39:23.690     clat percentiles (usec):
00:39:23.690      | 1.00th=[15139], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443],
00:39:23.690      | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705],
00:39:23.690      | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967],
00:39:23.690      | 99.00th=[29230], 99.50th=[29230], 99.90th=[96994], 99.95th=[96994],
00:39:23.690      | 99.99th=[96994]
00:39:23.690    bw (  KiB/s): min= 2176, max= 2432, per=4.22%, avg=2233.60, stdev=77.42, samples=20
00:39:23.690    iops        : min=  544, max=  608, avg=558.40, stdev=19.35, samples=20
00:39:23.690   lat (msec)   : 10=0.25%, 20=0.89%, 50=98.57%, 100=0.29%
00:39:23.690   cpu          : usr=98.40%, sys=1.23%, ctx=12, majf=0, minf=9
00:39:23.690   IO depths    : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename2: (groupid=0, jobs=1): err= 0: pid=1854673: Wed Nov 20 14:58:34 2024
00:39:23.690   read: IOPS=549, BW=2198KiB/s (2251kB/s)(21.6MiB/10074msec)
00:39:23.690     slat (nsec): min=4894, max=93656, avg=32078.57, stdev=16150.03
00:39:23.690     clat (msec): min=27, max=116, avg=28.80, stdev= 4.84
00:39:23.690      lat (msec): min=27, max=116, avg=28.83, stdev= 4.84
00:39:23.690     clat percentiles (msec):
00:39:23.690      | 1.00th=[   29], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.690      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.690      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.690      | 99.00th=[   30], 99.50th=[   49], 99.90th=[  117], 99.95th=[  117],
00:39:23.690      | 99.99th=[  117]
00:39:23.690    bw (  KiB/s): min= 2048, max= 2304, per=4.19%, avg=2216.42, stdev=74.55, samples=19
00:39:23.690    iops        : min=  512, max=  576, avg=554.11, stdev=18.64, samples=19
00:39:23.690   lat (msec)   : 50=99.71%, 250=0.29%
00:39:23.690   cpu          : usr=98.56%, sys=1.07%, ctx=15, majf=0, minf=9
00:39:23.690   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.690      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.690      issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.690      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.690 filename2: (groupid=0, jobs=1): err= 0: pid=1854674: Wed Nov 20 14:58:34 2024
00:39:23.691   read: IOPS=555, BW=2221KiB/s (2275kB/s)(21.9MiB/10084msec)
00:39:23.691     slat (nsec): min=7531, max=48127, avg=18084.73, stdev=5959.21
00:39:23.691     clat (usec): min=8168, max=96571, avg=28661.52, stdev=4046.06
00:39:23.691      lat (usec): min=8194, max=96585, avg=28679.60, stdev=4045.73
00:39:23.691     clat percentiles (usec):
00:39:23.691      | 1.00th=[15139], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443],
00:39:23.691      | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705],
00:39:23.691      | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967],
00:39:23.691      | 99.00th=[29230], 99.50th=[29492], 99.90th=[96994], 99.95th=[96994],
00:39:23.691      | 99.99th=[96994]
00:39:23.691    bw (  KiB/s): min= 2176, max= 2432, per=4.22%, avg=2233.60, stdev=77.42, samples=20
00:39:23.691    iops        : min=  544, max=  608, avg=558.40, stdev=19.35, samples=20
00:39:23.691   lat (msec)   : 10=0.29%, 20=0.86%, 50=98.57%, 100=0.29%
00:39:23.691   cpu          : usr=98.58%, sys=1.05%, ctx=11, majf=0, minf=9
00:39:23.691   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.691      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.691      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.691 filename2: (groupid=0, jobs=1): err= 0: pid=1854675: Wed Nov 20 14:58:34 2024
00:39:23.691   read: IOPS=550, BW=2202KiB/s (2254kB/s)(21.7MiB/10087msec)
00:39:23.691     slat (nsec): min=6219, max=94375, avg=33688.63, stdev=16879.23
00:39:23.691     clat (msec): min=27, max=116, avg=28.76, stdev= 4.70
00:39:23.691      lat (msec): min=27, max=116, avg=28.80, stdev= 4.70
00:39:23.691     clat percentiles (msec):
00:39:23.691      | 1.00th=[   28], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.691      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.691      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.691      | 99.00th=[   30], 99.50th=[   34], 99.90th=[  116], 99.95th=[  116],
00:39:23.691      | 99.99th=[  116]
00:39:23.691    bw (  KiB/s): min= 2137, max= 2304, per=4.18%, avg=2212.45, stdev=62.09, samples=20
00:39:23.691    iops        : min=  534, max=  576, avg=553.10, stdev=15.54, samples=20
00:39:23.691   lat (msec)   : 50=99.71%, 250=0.29%
00:39:23.691   cpu          : usr=98.57%, sys=1.05%, ctx=13, majf=0, minf=9
00:39:23.691   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.691      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.691      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.691 filename2: (groupid=0, jobs=1): err= 0: pid=1854676: Wed Nov 20 14:58:34 2024
00:39:23.691   read: IOPS=552, BW=2210KiB/s (2263kB/s)(21.8MiB/10107msec)
00:39:23.691     slat (nsec): min=8126, max=81561, avg=24616.64, stdev=12696.68
00:39:23.691     clat (msec): min=13, max=110, avg=28.73, stdev= 4.50
00:39:23.691      lat (msec): min=13, max=110, avg=28.75, stdev= 4.50
00:39:23.691     clat percentiles (msec):
00:39:23.691      | 1.00th=[   29], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.691      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.691      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.691      | 99.00th=[   30], 99.50th=[   30], 99.90th=[  111], 99.95th=[  111],
00:39:23.691      | 99.99th=[  111]
00:39:23.691    bw (  KiB/s): min= 2068, max= 2304, per=4.20%, avg=2221.80, stdev=72.83, samples=20
00:39:23.691    iops        : min=  517, max=  576, avg=555.45, stdev=18.21, samples=20
00:39:23.691   lat (msec)   : 20=0.57%, 50=99.14%, 250=0.29%
00:39:23.691   cpu          : usr=98.44%, sys=1.20%, ctx=13, majf=0, minf=9
00:39:23.691   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:39:23.691      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.691      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.691 filename2: (groupid=0, jobs=1): err= 0: pid=1854677: Wed Nov 20 14:58:34 2024
00:39:23.691   read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10005msec)
00:39:23.691     slat (nsec): min=7163, max=81507, avg=20858.71, stdev=10439.89
00:39:23.691     clat (usec): min=9651, max=29494, avg=28415.28, stdev=1915.85
00:39:23.691      lat (usec): min=9662, max=29509, avg=28436.14, stdev=1915.47
00:39:23.691     clat percentiles (usec):
00:39:23.691      | 1.00th=[14091], 5.00th=[28443], 10.00th=[28443], 20.00th=[28443],
00:39:23.691      | 30.00th=[28443], 40.00th=[28705], 50.00th=[28705], 60.00th=[28705],
00:39:23.691      | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967],
00:39:23.691      | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492],
00:39:23.691      | 99.99th=[29492]
00:39:23.691    bw (  KiB/s): min= 2176, max= 2560, per=4.23%, avg=2236.63, stdev=98.86, samples=19
00:39:23.691    iops        : min=  544, max=  640, avg=559.16, stdev=24.71, samples=19
00:39:23.691   lat (msec)   : 10=0.32%, 20=1.11%, 50=98.57%
00:39:23.691   cpu          : usr=98.63%, sys=0.97%, ctx=15, majf=0, minf=9
00:39:23.691   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:39:23.691      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.691      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.691 filename2: (groupid=0, jobs=1): err= 0: pid=1854678: Wed Nov 20 14:58:34 2024
00:39:23.691   read: IOPS=549, BW=2197KiB/s (2250kB/s)(21.6MiB/10077msec)
00:39:23.691     slat (nsec): min=3498, max=72143, avg=21860.66, stdev=14242.33
00:39:23.691     clat (msec): min=20, max=116, avg=28.95, stdev= 5.01
00:39:23.691      lat (msec): min=20, max=116, avg=28.97, stdev= 5.01
00:39:23.691     clat percentiles (msec):
00:39:23.691      | 1.00th=[   22], 5.00th=[   29], 10.00th=[   29], 20.00th=[   29],
00:39:23.691      | 30.00th=[   29], 40.00th=[   29], 50.00th=[   29], 60.00th=[   29],
00:39:23.691      | 70.00th=[   29], 80.00th=[   29], 90.00th=[   29], 95.00th=[   29],
00:39:23.691      | 99.00th=[   37], 99.50th=[   53], 99.90th=[  116], 99.95th=[  116],
00:39:23.691      | 99.99th=[  116]
00:39:23.691    bw (  KiB/s): min= 2048, max= 2304, per=4.17%, avg=2208.20, stdev=81.34, samples=20
00:39:23.691    iops        : min=  512, max=  576, avg=552.05, stdev=20.34, samples=20
00:39:23.691   lat (msec)   : 50=99.42%, 100=0.29%, 250=0.29%
00:39:23.691   cpu          : usr=98.53%, sys=1.11%, ctx=25, majf=0, minf=9
00:39:23.691   IO depths    : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0%
00:39:23.691      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:23.691      issued rwts: total=5536,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:23.691      latency   : target=0, window=0, percentile=100.00%, depth=16
00:39:23.691
00:39:23.691 Run status group 0 (all jobs):
00:39:23.691    READ: bw=51.7MiB/s (54.2MB/s), 2197KiB/s-2301KiB/s (2249kB/s-2356kB/s), io=522MiB (548MB), run=10005-10107msec
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:39:23.691  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  bdev_null0
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  [2024-11-20 14:58:34.304128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  bdev_null1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:39:23.692  {
00:39:23.692  "params": {
00:39:23.692  "name": "Nvme$subsystem",
00:39:23.692  "trtype": "$TEST_TRANSPORT",
00:39:23.692  "traddr": "$NVMF_FIRST_TARGET_IP",
00:39:23.692  "adrfam": "ipv4",
00:39:23.692  "trsvcid": "$NVMF_PORT",
00:39:23.692  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:39:23.692  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:39:23.692  "hdgst": ${hdgst:-false},
00:39:23.692  "ddgst": ${ddgst:-false}
00:39:23.692  },
00:39:23.692  "method": "bdev_nvme_attach_controller"
00:39:23.692  }
00:39:23.692  EOF
00:39:23.692  )")
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:39:23.692  {
00:39:23.692  "params": {
00:39:23.692  "name": "Nvme$subsystem",
00:39:23.692  "trtype": "$TEST_TRANSPORT",
00:39:23.692  "traddr": "$NVMF_FIRST_TARGET_IP",
00:39:23.692  "adrfam": "ipv4",
00:39:23.692  "trsvcid": "$NVMF_PORT",
00:39:23.692  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:39:23.692  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:39:23.692  "hdgst": ${hdgst:-false},
00:39:23.692  "ddgst": ${ddgst:-false}
00:39:23.692  },
00:39:23.692  "method": "bdev_nvme_attach_controller"
00:39:23.692  }
00:39:23.692  EOF
00:39:23.692  )")
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:39:23.692  "params": {
00:39:23.692  "name": "Nvme0",
00:39:23.692  "trtype": "tcp",
00:39:23.692  "traddr": "10.0.0.2",
00:39:23.692  "adrfam": "ipv4",
00:39:23.692  "trsvcid": "4420",
00:39:23.692  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:23.692  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:23.692  "hdgst": false,
00:39:23.692  "ddgst": false
00:39:23.692  },
00:39:23.692  "method": "bdev_nvme_attach_controller"
00:39:23.692  },{
00:39:23.692  "params": {
00:39:23.692  "name": "Nvme1",
00:39:23.692  "trtype": "tcp",
00:39:23.692  "traddr": "10.0.0.2",
00:39:23.692  "adrfam": "ipv4",
00:39:23.692  "trsvcid": "4420",
00:39:23.692  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:39:23.692  "hostnqn": "nqn.2016-06.io.spdk:host1",
00:39:23.692  "hdgst": false,
00:39:23.692  "ddgst": false
00:39:23.692  },
00:39:23.692  "method": "bdev_nvme_attach_controller"
00:39:23.692  }'
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:39:23.692  14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:39:23.692  14:58:34
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.692 14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:23.692 14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:23.692 14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:23.692 14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:23.692 14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:23.692 14:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.692 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:23.692 ... 00:39:23.692 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:23.692 ... 
00:39:23.692 fio-3.35 00:39:23.692 Starting 4 threads 00:39:28.962 00:39:28.962 filename0: (groupid=0, jobs=1): err= 0: pid=1856593: Wed Nov 20 14:58:40 2024 00:39:28.962 read: IOPS=2753, BW=21.5MiB/s (22.6MB/s)(108MiB/5002msec) 00:39:28.962 slat (usec): min=6, max=177, avg= 8.73, stdev= 3.14 00:39:28.962 clat (usec): min=969, max=5237, avg=2877.92, stdev=403.51 00:39:28.962 lat (usec): min=983, max=5247, avg=2886.65, stdev=403.25 00:39:28.962 clat percentiles (usec): 00:39:28.962 | 1.00th=[ 1860], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:39:28.962 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3064], 00:39:28.962 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3261], 95.00th=[ 3458], 00:39:28.962 | 99.00th=[ 4015], 99.50th=[ 4178], 99.90th=[ 4621], 99.95th=[ 4948], 00:39:28.962 | 99.99th=[ 5211] 00:39:28.962 bw ( KiB/s): min=20672, max=24016, per=26.94%, avg=22001.78, stdev=1234.24, samples=9 00:39:28.962 iops : min= 2584, max= 3002, avg=2750.22, stdev=154.28, samples=9 00:39:28.962 lat (usec) : 1000=0.02% 00:39:28.962 lat (msec) : 2=1.68%, 4=97.30%, 10=0.99% 00:39:28.962 cpu : usr=95.84%, sys=3.84%, ctx=9, majf=0, minf=9 00:39:28.962 IO depths : 1=0.3%, 2=7.6%, 4=64.1%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:28.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.962 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.962 issued rwts: total=13773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:28.962 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:28.962 filename0: (groupid=0, jobs=1): err= 0: pid=1856594: Wed Nov 20 14:58:40 2024 00:39:28.962 read: IOPS=2506, BW=19.6MiB/s (20.5MB/s)(98.0MiB/5003msec) 00:39:28.962 slat (nsec): min=6282, max=35091, avg=9102.00, stdev=2988.63 00:39:28.962 clat (usec): min=839, max=5716, avg=3167.49, stdev=437.82 00:39:28.962 lat (usec): min=853, max=5728, avg=3176.60, stdev=437.65 00:39:28.962 clat percentiles (usec): 00:39:28.962 
| 1.00th=[ 2245], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2900], 00:39:28.962 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:39:28.962 | 70.00th=[ 3195], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 3884], 00:39:28.962 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 5473], 00:39:28.962 | 99.99th=[ 5604] 00:39:28.962 bw ( KiB/s): min=19312, max=20768, per=24.55%, avg=20052.80, stdev=483.07, samples=10 00:39:28.962 iops : min= 2414, max= 2596, avg=2506.60, stdev=60.38, samples=10 00:39:28.962 lat (usec) : 1000=0.01% 00:39:28.962 lat (msec) : 2=0.19%, 4=95.64%, 10=4.16% 00:39:28.962 cpu : usr=96.22%, sys=3.48%, ctx=7, majf=0, minf=9 00:39:28.962 IO depths : 1=0.1%, 2=2.5%, 4=66.0%, 8=31.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:28.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.963 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.963 issued rwts: total=12541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:28.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:28.963 filename1: (groupid=0, jobs=1): err= 0: pid=1856595: Wed Nov 20 14:58:40 2024 00:39:28.963 read: IOPS=2502, BW=19.6MiB/s (20.5MB/s)(97.8MiB/5002msec) 00:39:28.963 slat (nsec): min=6262, max=45232, avg=8638.48, stdev=2879.90 00:39:28.963 clat (usec): min=1421, max=5569, avg=3170.77, stdev=427.15 00:39:28.963 lat (usec): min=1433, max=5575, avg=3179.41, stdev=426.98 00:39:28.963 clat percentiles (usec): 00:39:28.963 | 1.00th=[ 2180], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2966], 00:39:28.963 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:39:28.963 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3720], 95.00th=[ 3916], 00:39:28.963 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5473], 00:39:28.963 | 99.99th=[ 5538] 00:39:28.963 bw ( KiB/s): min=19008, max=21440, per=24.51%, avg=20022.40, stdev=725.69, samples=10 00:39:28.963 iops : min= 2376, max= 
2680, avg=2502.80, stdev=90.71, samples=10 00:39:28.963 lat (msec) : 2=0.44%, 4=95.41%, 10=4.15% 00:39:28.963 cpu : usr=95.74%, sys=3.94%, ctx=6, majf=0, minf=9 00:39:28.963 IO depths : 1=0.2%, 2=1.9%, 4=71.3%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:28.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.963 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.963 issued rwts: total=12519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:28.963 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:28.963 filename1: (groupid=0, jobs=1): err= 0: pid=1856596: Wed Nov 20 14:58:40 2024 00:39:28.963 read: IOPS=2447, BW=19.1MiB/s (20.0MB/s)(95.6MiB/5003msec) 00:39:28.963 slat (nsec): min=6281, max=43752, avg=8537.20, stdev=2769.61 00:39:28.963 clat (usec): min=1018, max=5837, avg=3243.63, stdev=428.75 00:39:28.963 lat (usec): min=1024, max=5858, avg=3252.17, stdev=428.62 00:39:28.963 clat percentiles (usec): 00:39:28.963 | 1.00th=[ 2311], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3064], 00:39:28.963 | 30.00th=[ 3097], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:39:28.963 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3785], 95.00th=[ 4047], 00:39:28.963 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5538], 99.95th=[ 5604], 00:39:28.963 | 99.99th=[ 5735] 00:39:28.963 bw ( KiB/s): min=18560, max=20336, per=23.97%, avg=19579.80, stdev=547.18, samples=10 00:39:28.963 iops : min= 2320, max= 2542, avg=2447.40, stdev=68.47, samples=10 00:39:28.963 lat (msec) : 2=0.19%, 4=94.22%, 10=5.60% 00:39:28.963 cpu : usr=96.42%, sys=3.26%, ctx=8, majf=0, minf=9 00:39:28.963 IO depths : 1=0.1%, 2=1.4%, 4=72.1%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:28.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.963 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:28.963 issued rwts: total=12243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:28.963 latency : 
target=0, window=0, percentile=100.00%, depth=8 00:39:28.963 00:39:28.963 Run status group 0 (all jobs): 00:39:28.963 READ: bw=79.8MiB/s (83.6MB/s), 19.1MiB/s-21.5MiB/s (20.0MB/s-22.6MB/s), io=399MiB (418MB), run=5002-5003msec 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:28.963 14:58:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.963 00:39:28.963 real 0m24.260s 00:39:28.963 user 4m53.627s 00:39:28.963 sys 0m5.005s 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 ************************************ 00:39:28.963 END TEST fio_dif_rand_params 00:39:28.963 ************************************ 00:39:28.963 14:58:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:28.963 14:58:40 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:28.963 14:58:40 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 ************************************ 00:39:28.963 START TEST fio_dif_digest 00:39:28.963 ************************************ 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 
00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:28.963 bdev_null0 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.963 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- 
# rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:28.964 [2024-11-20 14:58:40.653264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:28.964 { 00:39:28.964 "params": { 00:39:28.964 "name": "Nvme$subsystem", 00:39:28.964 "trtype": "$TEST_TRANSPORT", 00:39:28.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:28.964 "adrfam": "ipv4", 00:39:28.964 "trsvcid": "$NVMF_PORT", 00:39:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:28.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:28.964 "hdgst": ${hdgst:-false}, 00:39:28.964 "ddgst": ${ddgst:-false} 00:39:28.964 }, 00:39:28.964 "method": "bdev_nvme_attach_controller" 00:39:28.964 } 00:39:28.964 EOF 00:39:28.964 )") 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 
00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:28.964 "params": { 00:39:28.964 "name": "Nvme0", 00:39:28.964 "trtype": "tcp", 00:39:28.964 "traddr": "10.0.0.2", 00:39:28.964 "adrfam": "ipv4", 00:39:28.964 "trsvcid": "4420", 00:39:28.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:28.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:28.964 "hdgst": true, 00:39:28.964 "ddgst": true 00:39:28.964 }, 00:39:28.964 "method": "bdev_nvme_attach_controller" 00:39:28.964 }' 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:28.964 14:58:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 
-- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:29.226 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:29.226 ... 00:39:29.226 fio-3.35 00:39:29.226 Starting 3 threads 00:39:41.514 00:39:41.514 filename0: (groupid=0, jobs=1): err= 0: pid=1857685: Wed Nov 20 14:58:51 2024 00:39:41.514 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10043msec) 00:39:41.514 slat (nsec): min=6595, max=77954, avg=16543.56, stdev=6508.31 00:39:41.514 clat (usec): min=7806, max=49350, avg=10703.79, stdev=1227.66 00:39:41.514 lat (usec): min=7818, max=49376, avg=10720.33, stdev=1228.33 00:39:41.514 clat percentiles (usec): 00:39:41.514 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:39:41.514 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:39:41.514 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:39:41.514 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13304], 99.95th=[45876], 00:39:41.514 | 99.99th=[49546] 00:39:41.514 bw ( KiB/s): min=34816, max=37120, per=35.24%, avg=35905.45, stdev=651.59, samples=20 00:39:41.514 iops : min= 272, max= 290, avg=280.40, stdev= 5.09, samples=20 00:39:41.514 lat (msec) : 10=17.28%, 20=82.64%, 50=0.07% 00:39:41.514 cpu : usr=95.93%, sys=3.75%, ctx=35, majf=0, minf=64 00:39:41.514 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.514 issued rwts: total=2806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.514 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:41.514 filename0: (groupid=0, jobs=1): err= 0: pid=1857686: Wed Nov 20 14:58:51 2024 00:39:41.514 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(330MiB/10046msec) 00:39:41.514 slat (nsec): min=6618, max=62012, 
avg=15331.91, stdev=6864.98 00:39:41.514 clat (usec): min=6554, max=46344, avg=11387.11, stdev=1225.21 00:39:41.514 lat (usec): min=6566, max=46356, avg=11402.44, stdev=1225.05 00:39:41.514 clat percentiles (usec): 00:39:41.514 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:39:41.514 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:39:41.514 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:39:41.514 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14091], 99.95th=[45351], 00:39:41.514 | 99.99th=[46400] 00:39:41.514 bw ( KiB/s): min=32512, max=34560, per=33.13%, avg=33753.60, stdev=594.75, samples=20 00:39:41.514 iops : min= 254, max= 270, avg=263.70, stdev= 4.65, samples=20 00:39:41.514 lat (msec) : 10=3.75%, 20=96.17%, 50=0.08% 00:39:41.514 cpu : usr=96.50%, sys=3.17%, ctx=37, majf=0, minf=109 00:39:41.514 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.514 issued rwts: total=2639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.514 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:41.514 filename0: (groupid=0, jobs=1): err= 0: pid=1857687: Wed Nov 20 14:58:51 2024 00:39:41.514 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(319MiB/10046msec) 00:39:41.514 slat (nsec): min=6617, max=51804, avg=15003.93, stdev=6449.86 00:39:41.514 clat (usec): min=9129, max=49254, avg=11780.46, stdev=1293.40 00:39:41.514 lat (usec): min=9140, max=49264, avg=11795.46, stdev=1293.14 00:39:41.514 clat percentiles (usec): 00:39:41.514 | 1.00th=[10028], 5.00th=[10552], 10.00th=[10814], 20.00th=[11076], 00:39:41.514 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:39:41.514 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:39:41.514 | 99.00th=[13698], 99.50th=[13960], 
99.90th=[15139], 99.95th=[47973], 00:39:41.514 | 99.99th=[49021] 00:39:41.514 bw ( KiB/s): min=32000, max=33792, per=32.02%, avg=32627.20, stdev=508.45, samples=20 00:39:41.514 iops : min= 250, max= 264, avg=254.90, stdev= 3.97, samples=20 00:39:41.514 lat (msec) : 10=1.14%, 20=98.78%, 50=0.08% 00:39:41.514 cpu : usr=96.52%, sys=3.18%, ctx=16, majf=0, minf=54 00:39:41.514 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.514 issued rwts: total=2551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.514 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:41.514 00:39:41.514 Run status group 0 (all jobs): 00:39:41.514 READ: bw=99.5MiB/s (104MB/s), 31.7MiB/s-34.9MiB/s (33.3MB/s-36.6MB/s), io=1000MiB (1048MB), run=10043-10046msec 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.514 00:39:41.514 real 0m11.154s 00:39:41.514 user 0m35.316s 00:39:41.514 sys 0m1.332s 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.514 14:58:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.514 ************************************ 00:39:41.514 END TEST fio_dif_digest 00:39:41.514 ************************************ 00:39:41.514 14:58:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:41.514 14:58:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:41.514 rmmod nvme_tcp 00:39:41.514 rmmod nvme_fabrics 00:39:41.514 rmmod nvme_keyring 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1849398 ']' 00:39:41.514 14:58:51 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1849398 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1849398 ']' 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1849398 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1849398 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1849398' 00:39:41.514 killing process with pid 1849398 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1849398 00:39:41.514 14:58:51 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1849398 00:39:41.514 14:58:52 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:41.514 14:58:52 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:42.894 Waiting for block devices as requested 00:39:42.894 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:39:43.153 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:43.153 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:43.153 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:43.413 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:43.413 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:43.413 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:43.672 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:43.672 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:43.672 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:39:43.672 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:39:43.931 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:39:43.931 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:39:43.931 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:39:44.190 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:39:44.190 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:39:44.190 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:39:44.190 14:58:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:44.190 14:58:56 nvmf_dif -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:44.190 14:58:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:44.190 14:58:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:44.190 14:58:56 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:44.190 14:58:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:44.449 14:58:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:44.449 14:58:56 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:44.449 14:58:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:44.449 14:58:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:44.449 14:58:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.354 14:58:58 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:46.354 00:39:46.354 real 1m14.078s 00:39:46.354 user 7m10.703s 00:39:46.354 sys 0m19.948s 00:39:46.354 14:58:58 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:46.354 14:58:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:46.354 ************************************ 00:39:46.354 END TEST nvmf_dif 00:39:46.354 ************************************ 00:39:46.354 14:58:58 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:46.354 14:58:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:46.354 14:58:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:46.354 14:58:58 -- common/autotest_common.sh@10 -- # set +x 00:39:46.354 ************************************ 00:39:46.354 START TEST nvmf_abort_qd_sizes 00:39:46.354 ************************************ 00:39:46.354 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:46.613 * Looking for test 
storage... 00:39:46.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:46.613 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:46.613 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:39:46.613 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:46.613 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:46.613 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:46.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.614 --rc genhtml_branch_coverage=1 00:39:46.614 --rc genhtml_function_coverage=1 00:39:46.614 --rc genhtml_legend=1 00:39:46.614 --rc geninfo_all_blocks=1 00:39:46.614 --rc geninfo_unexecuted_blocks=1 00:39:46.614 00:39:46.614 ' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:46.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.614 --rc genhtml_branch_coverage=1 00:39:46.614 --rc genhtml_function_coverage=1 00:39:46.614 --rc genhtml_legend=1 00:39:46.614 --rc 
geninfo_all_blocks=1 00:39:46.614 --rc geninfo_unexecuted_blocks=1 00:39:46.614 00:39:46.614 ' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:46.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.614 --rc genhtml_branch_coverage=1 00:39:46.614 --rc genhtml_function_coverage=1 00:39:46.614 --rc genhtml_legend=1 00:39:46.614 --rc geninfo_all_blocks=1 00:39:46.614 --rc geninfo_unexecuted_blocks=1 00:39:46.614 00:39:46.614 ' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:46.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:46.614 --rc genhtml_branch_coverage=1 00:39:46.614 --rc genhtml_function_coverage=1 00:39:46.614 --rc genhtml_legend=1 00:39:46.614 --rc geninfo_all_blocks=1 00:39:46.614 --rc geninfo_unexecuted_blocks=1 00:39:46.614 00:39:46.614 ' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:46.614 14:58:58 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:46.614 14:58:58 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:46.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:46.614 14:58:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:53.178 14:59:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:53.178 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.178 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:53.179 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:53.179 Found net devices under 0000:86:00.0: cvl_0_0 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:53.179 Found net devices under 0000:86:00.1: cvl_0_1 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:53.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:53.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:39:53.179 00:39:53.179 --- 10.0.0.2 ping statistics --- 00:39:53.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.179 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:53.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:53.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:39:53.179 00:39:53.179 --- 10.0.0.1 ping statistics --- 00:39:53.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.179 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:53.179 14:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:55.713 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:55.713 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:56.280 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.281 14:59:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1865566 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1865566 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1865566 ']' 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.281 14:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:56.281 [2024-11-20 14:59:08.229646] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:39:56.281 [2024-11-20 14:59:08.229689] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.539 [2024-11-20 14:59:08.310211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:56.539 [2024-11-20 14:59:08.354186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.539 [2024-11-20 14:59:08.354225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.539 [2024-11-20 14:59:08.354232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.539 [2024-11-20 14:59:08.354238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.539 [2024-11-20 14:59:08.354244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:56.539 [2024-11-20 14:59:08.355738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.539 [2024-11-20 14:59:08.355850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:56.539 [2024-11-20 14:59:08.355977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:56.539 [2024-11-20 14:59:08.355979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.475 14:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:57.475 ************************************ 00:39:57.475 START TEST spdk_target_abort 00:39:57.475 ************************************ 00:39:57.475 14:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:57.475 14:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:57.475 14:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:39:57.475 14:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.475 14:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.006 spdk_targetn1 00:40:00.006 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.006 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:00.006 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.006 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.006 [2024-11-20 14:59:11.962110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.265 14:59:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.265 [2024-11-20 14:59:12.006537] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:00.265 14:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:03.550 Initializing NVMe Controllers 00:40:03.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:03.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:03.550 Initialization complete. Launching workers. 
00:40:03.550 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15202, failed: 0 00:40:03.550 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1341, failed to submit 13861 00:40:03.550 success 745, unsuccessful 596, failed 0 00:40:03.550 14:59:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:03.550 14:59:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:06.837 Initializing NVMe Controllers 00:40:06.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:06.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:06.837 Initialization complete. Launching workers. 00:40:06.837 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8526, failed: 0 00:40:06.837 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7267 00:40:06.837 success 324, unsuccessful 935, failed 0 00:40:06.837 14:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:06.837 14:59:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:10.119 Initializing NVMe Controllers 00:40:10.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:10.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:10.119 Initialization complete. Launching workers. 
00:40:10.119 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37688, failed: 0 00:40:10.119 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2886, failed to submit 34802 00:40:10.119 success 587, unsuccessful 2299, failed 0 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.119 14:59:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1865566 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1865566 ']' 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1865566 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1865566 00:40:11.495 14:59:23 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1865566' 00:40:11.495 killing process with pid 1865566 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1865566 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1865566 00:40:11.495 00:40:11.495 real 0m14.127s 00:40:11.495 user 0m56.195s 00:40:11.495 sys 0m2.608s 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:11.495 ************************************ 00:40:11.495 END TEST spdk_target_abort 00:40:11.495 ************************************ 00:40:11.495 14:59:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:11.495 14:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:11.495 14:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.495 14:59:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:11.495 ************************************ 00:40:11.495 START TEST kernel_target_abort 00:40:11.495 ************************************ 00:40:11.495 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:11.496 14:59:23 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:11.496 14:59:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:14.032 Waiting for block devices as requested 00:40:14.292 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:14.292 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:14.292 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:14.551 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:14.551 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:14.551 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:14.810 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:14.810 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:14.810 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:15.069 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:15.069 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:15.069 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:15.069 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:15.327 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:15.327 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:15.327 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:15.586 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:15.586 14:59:27 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:15.586 No valid GPT data, bailing 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:15.586 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:15.587 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:40:15.846 00:40:15.846 Discovery Log Number of Records 2, Generation counter 2 00:40:15.846 =====Discovery Log Entry 0====== 00:40:15.846 trtype: tcp 00:40:15.846 adrfam: ipv4 00:40:15.846 subtype: current discovery subsystem 00:40:15.846 treq: not specified, sq flow control disable supported 00:40:15.846 portid: 1 00:40:15.846 trsvcid: 4420 00:40:15.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:15.846 traddr: 10.0.0.1 00:40:15.846 eflags: none 00:40:15.846 sectype: none 00:40:15.846 =====Discovery Log Entry 1====== 00:40:15.846 trtype: tcp 00:40:15.846 adrfam: ipv4 00:40:15.846 subtype: nvme subsystem 00:40:15.846 treq: not specified, sq flow control disable supported 00:40:15.846 portid: 1 00:40:15.846 trsvcid: 4420 00:40:15.846 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:15.846 traddr: 10.0.0.1 00:40:15.846 eflags: none 00:40:15.846 sectype: none 00:40:15.846 14:59:27 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:15.846 14:59:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:19.132 Initializing NVMe Controllers 00:40:19.132 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:19.132 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:19.132 Initialization complete. Launching workers. 
00:40:19.132 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92383, failed: 0 00:40:19.132 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92383, failed to submit 0 00:40:19.132 success 0, unsuccessful 92383, failed 0 00:40:19.132 14:59:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:19.132 14:59:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:22.420 Initializing NVMe Controllers 00:40:22.420 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:22.420 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:22.420 Initialization complete. Launching workers. 00:40:22.420 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146079, failed: 0 00:40:22.420 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36566, failed to submit 109513 00:40:22.420 success 0, unsuccessful 36566, failed 0 00:40:22.420 14:59:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:22.420 14:59:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:25.708 Initializing NVMe Controllers 00:40:25.708 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:25.708 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:25.708 Initialization complete. Launching workers. 
00:40:25.708 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137386, failed: 0 00:40:25.708 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34414, failed to submit 102972 00:40:25.708 success 0, unsuccessful 34414, failed 0 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:25.708 14:59:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:28.243 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:28.243 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:28.812 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:40:29.071 00:40:29.071 real 0m17.524s 00:40:29.071 user 0m9.120s 00:40:29.071 sys 0m5.092s 00:40:29.071 14:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.071 14:59:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:29.071 ************************************ 00:40:29.071 END TEST kernel_target_abort 00:40:29.071 ************************************ 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:29.071 rmmod nvme_tcp 00:40:29.071 rmmod nvme_fabrics 00:40:29.071 rmmod nvme_keyring 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1865566 ']' 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1865566 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1865566 ']' 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1865566 00:40:29.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1865566) - No such process 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1865566 is not found' 00:40:29.071 Process with pid 1865566 is not found 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:29.071 14:59:40 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:32.362 Waiting for block devices as requested 00:40:32.362 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:32.362 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:32.362 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:32.622 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:32.622 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:32.622 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:32.881 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:32.881 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:32.881 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:33.141 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:33.141 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:33.141 14:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.754 14:59:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:35.754 00:40:35.754 real 0m48.863s 00:40:35.754 user 1m9.840s 00:40:35.754 sys 0m16.399s 00:40:35.754 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:35.754 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:35.754 ************************************ 00:40:35.754 END TEST nvmf_abort_qd_sizes 00:40:35.754 ************************************ 00:40:35.754 14:59:47 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:35.754 14:59:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:35.754 14:59:47 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:40:35.754 14:59:47 -- common/autotest_common.sh@10 -- # set +x 00:40:35.754 ************************************ 00:40:35.754 START TEST keyring_file 00:40:35.754 ************************************ 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:35.754 * Looking for test storage... 00:40:35.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:35.754 14:59:47 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:35.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.754 --rc genhtml_branch_coverage=1 00:40:35.754 --rc genhtml_function_coverage=1 00:40:35.754 --rc genhtml_legend=1 00:40:35.754 --rc geninfo_all_blocks=1 00:40:35.754 --rc geninfo_unexecuted_blocks=1 00:40:35.754 00:40:35.754 ' 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:35.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.754 --rc genhtml_branch_coverage=1 00:40:35.754 --rc genhtml_function_coverage=1 00:40:35.754 --rc genhtml_legend=1 00:40:35.754 --rc geninfo_all_blocks=1 00:40:35.754 --rc 
geninfo_unexecuted_blocks=1 00:40:35.754 00:40:35.754 ' 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:35.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.754 --rc genhtml_branch_coverage=1 00:40:35.754 --rc genhtml_function_coverage=1 00:40:35.754 --rc genhtml_legend=1 00:40:35.754 --rc geninfo_all_blocks=1 00:40:35.754 --rc geninfo_unexecuted_blocks=1 00:40:35.754 00:40:35.754 ' 00:40:35.754 14:59:47 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:35.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.754 --rc genhtml_branch_coverage=1 00:40:35.754 --rc genhtml_function_coverage=1 00:40:35.754 --rc genhtml_legend=1 00:40:35.754 --rc geninfo_all_blocks=1 00:40:35.754 --rc geninfo_unexecuted_blocks=1 00:40:35.754 00:40:35.754 ' 00:40:35.754 14:59:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:35.754 14:59:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:35.754 14:59:47 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:35.754 14:59:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:35.754 14:59:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:35.754 14:59:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.754 14:59:47 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.754 14:59:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.755 14:59:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:35.755 14:59:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:35.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WUiAGqWviV 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WUiAGqWviV 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WUiAGqWviV 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WUiAGqWviV 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.m02atIZ6DM 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:35.755 14:59:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.m02atIZ6DM 00:40:35.755 14:59:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.m02atIZ6DM 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.m02atIZ6DM 
00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=1874236 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:35.755 14:59:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1874236 00:40:35.755 14:59:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1874236 ']' 00:40:35.755 14:59:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.755 14:59:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:35.755 14:59:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.755 14:59:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:35.755 14:59:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:35.755 [2024-11-20 14:59:47.530255] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:40:35.755 [2024-11-20 14:59:47.530306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874236 ] 00:40:35.755 [2024-11-20 14:59:47.606458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.755 [2024-11-20 14:59:47.650717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:36.691 14:59:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:36.691 [2024-11-20 14:59:48.359877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:36.691 null0 00:40:36.691 [2024-11-20 14:59:48.391933] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:36.691 [2024-11-20 14:59:48.392278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.691 14:59:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:36.691 [2024-11-20 14:59:48.420000] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:36.691 request: 00:40:36.691 { 00:40:36.691 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:36.691 "secure_channel": false, 00:40:36.691 "listen_address": { 00:40:36.691 "trtype": "tcp", 00:40:36.691 "traddr": "127.0.0.1", 00:40:36.691 "trsvcid": "4420" 00:40:36.691 }, 00:40:36.691 "method": "nvmf_subsystem_add_listener", 00:40:36.691 "req_id": 1 00:40:36.691 } 00:40:36.691 Got JSON-RPC error response 00:40:36.691 response: 00:40:36.691 { 00:40:36.691 "code": -32602, 00:40:36.691 "message": "Invalid parameters" 00:40:36.691 } 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:36.691 14:59:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=1874397 00:40:36.691 14:59:48 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:36.691 14:59:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1874397 /var/tmp/bperf.sock 00:40:36.691 14:59:48 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1874397 ']' 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:36.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:36.691 14:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:36.691 [2024-11-20 14:59:48.473145] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 00:40:36.692 [2024-11-20 14:59:48.473188] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874397 ] 00:40:36.692 [2024-11-20 14:59:48.547677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.692 [2024-11-20 14:59:48.590589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.951 14:59:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.951 14:59:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:36.951 14:59:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:36.951 14:59:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:36.951 14:59:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.m02atIZ6DM 00:40:36.951 14:59:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.m02atIZ6DM 00:40:37.211 14:59:49 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:37.211 14:59:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:37.211 14:59:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.211 14:59:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:37.211 14:59:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.470 14:59:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WUiAGqWviV == \/\t\m\p\/\t\m\p\.\W\U\i\A\G\q\W\v\i\V ]] 00:40:37.470 14:59:49 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:37.470 14:59:49 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:37.470 14:59:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.470 14:59:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:37.470 14:59:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.729 14:59:49 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.m02atIZ6DM == \/\t\m\p\/\t\m\p\.\m\0\2\a\t\I\Z\6\D\M ]] 00:40:37.729 14:59:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:40:37.729 14:59:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:37.729 14:59:49 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:37.729 14:59:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.987 14:59:49 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:37.987 14:59:49 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:37.987 14:59:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:38.246 [2024-11-20 14:59:50.046025] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:38.246 nvme0n1 00:40:38.246 14:59:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:38.246 14:59:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:38.246 14:59:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:38.246 14:59:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:38.246 14:59:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:38.246 14:59:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:40:38.505 14:59:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:38.505 14:59:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:38.505 14:59:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:38.505 14:59:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:38.505 14:59:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:38.505 14:59:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:38.505 14:59:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:38.764 14:59:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:38.764 14:59:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:38.764 Running I/O for 1 seconds... 00:40:39.699 18667.00 IOPS, 72.92 MiB/s 00:40:39.699 Latency(us) 00:40:39.699 [2024-11-20T13:59:51.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:39.699 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:39.699 nvme0n1 : 1.00 18713.70 73.10 0.00 0.00 6826.66 2778.16 13392.14 00:40:39.699 [2024-11-20T13:59:51.657Z] =================================================================================================================== 00:40:39.699 [2024-11-20T13:59:51.657Z] Total : 18713.70 73.10 0.00 0.00 6826.66 2778.16 13392.14 00:40:39.699 { 00:40:39.699 "results": [ 00:40:39.699 { 00:40:39.699 "job": "nvme0n1", 00:40:39.699 "core_mask": "0x2", 00:40:39.699 "workload": "randrw", 00:40:39.699 "percentage": 50, 00:40:39.699 "status": "finished", 00:40:39.699 "queue_depth": 128, 00:40:39.699 "io_size": 4096, 00:40:39.699 "runtime": 1.004398, 00:40:39.699 "iops": 18713.6971598908, 00:40:39.699 "mibps": 73.10037953082343, 
00:40:39.699 "io_failed": 0, 00:40:39.699 "io_timeout": 0, 00:40:39.699 "avg_latency_us": 6826.657444599684, 00:40:39.699 "min_latency_us": 2778.1565217391303, 00:40:39.699 "max_latency_us": 13392.139130434784 00:40:39.699 } 00:40:39.699 ], 00:40:39.699 "core_count": 1 00:40:39.699 } 00:40:39.699 14:59:51 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:39.699 14:59:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:39.957 14:59:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:39.957 14:59:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:39.957 14:59:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:39.957 14:59:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:39.957 14:59:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:39.957 14:59:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:40.221 14:59:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:40.221 14:59:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:40.221 14:59:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:40.221 14:59:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:40.221 14:59:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:40.221 14:59:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:40.222 14:59:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:40.484 14:59:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:40.484 14:59:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:40.484 14:59:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:40.484 14:59:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:40.742 [2024-11-20 14:59:52.444043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:40.742 [2024-11-20 14:59:52.444474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17731f0 (107): Transport endpoint is not connected 00:40:40.742 [2024-11-20 14:59:52.445470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17731f0 (9): Bad file descriptor 00:40:40.742 [2024-11-20 14:59:52.446471] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:40.742 [2024-11-20 14:59:52.446485] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:40.742 [2024-11-20 14:59:52.446493] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:40.742 [2024-11-20 14:59:52.446505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:40:40.742 request: 00:40:40.742 { 00:40:40.742 "name": "nvme0", 00:40:40.742 "trtype": "tcp", 00:40:40.742 "traddr": "127.0.0.1", 00:40:40.742 "adrfam": "ipv4", 00:40:40.742 "trsvcid": "4420", 00:40:40.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:40.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:40.742 "prchk_reftag": false, 00:40:40.742 "prchk_guard": false, 00:40:40.742 "hdgst": false, 00:40:40.742 "ddgst": false, 00:40:40.742 "psk": "key1", 00:40:40.742 "allow_unrecognized_csi": false, 00:40:40.742 "method": "bdev_nvme_attach_controller", 00:40:40.742 "req_id": 1 00:40:40.742 } 00:40:40.742 Got JSON-RPC error response 00:40:40.742 response: 00:40:40.742 { 00:40:40.742 "code": -5, 00:40:40.742 "message": "Input/output error" 00:40:40.742 } 00:40:40.742 14:59:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:40.742 14:59:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:40.742 14:59:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:40.742 14:59:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:40.742 14:59:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:40.742 14:59:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:40.742 14:59:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:40.742 14:59:52 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:40.743 14:59:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:40.743 14:59:52 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:40.743 14:59:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:41.001 14:59:52 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:41.001 14:59:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:41.001 14:59:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:41.260 14:59:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:41.260 14:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:41.518 14:59:53 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:41.518 14:59:53 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:41.518 14:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.518 14:59:53 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:40:41.518 14:59:53 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.WUiAGqWviV 00:40:41.518 14:59:53 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:41.518 14:59:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:41.519 14:59:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:41.519 14:59:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:41.519 14:59:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:41.519 14:59:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:41.519 14:59:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:41.519 14:59:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:41.519 14:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:41.777 [2024-11-20 14:59:53.645669] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WUiAGqWviV': 0100660 00:40:41.777 [2024-11-20 14:59:53.645696] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:41.777 request: 00:40:41.777 { 00:40:41.777 "name": "key0", 00:40:41.777 "path": "/tmp/tmp.WUiAGqWviV", 00:40:41.777 "method": "keyring_file_add_key", 00:40:41.777 "req_id": 1 00:40:41.777 } 00:40:41.777 Got JSON-RPC error response 00:40:41.777 response: 00:40:41.777 { 00:40:41.777 "code": -1, 00:40:41.777 "message": "Operation not permitted" 00:40:41.777 } 00:40:41.777 14:59:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:41.777 14:59:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:41.777 14:59:53 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:41.777 14:59:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:41.777 14:59:53 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.WUiAGqWviV 00:40:41.777 14:59:53 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:41.777 14:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WUiAGqWviV 00:40:42.036 14:59:53 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.WUiAGqWviV 00:40:42.036 14:59:53 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:42.036 14:59:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:42.036 14:59:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:42.036 14:59:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:42.036 14:59:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:42.036 14:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:42.294 14:59:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:42.294 14:59:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:42.294 14:59:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:42.295 14:59:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:42.295 14:59:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:42.295 14:59:54 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:42.295 14:59:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:42.295 14:59:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:42.295 14:59:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:42.295 14:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:42.553 [2024-11-20 14:59:54.259296] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WUiAGqWviV': No such file or directory 00:40:42.553 [2024-11-20 14:59:54.259318] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:42.553 [2024-11-20 14:59:54.259335] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:42.553 [2024-11-20 14:59:54.259358] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:42.553 [2024-11-20 14:59:54.259365] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:42.553 [2024-11-20 14:59:54.259372] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:42.553 request: 00:40:42.553 { 00:40:42.553 "name": "nvme0", 00:40:42.553 "trtype": "tcp", 00:40:42.553 "traddr": "127.0.0.1", 00:40:42.553 "adrfam": "ipv4", 00:40:42.553 "trsvcid": "4420", 00:40:42.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:42.553 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:40:42.553 "prchk_reftag": false, 00:40:42.553 "prchk_guard": false, 00:40:42.553 "hdgst": false, 00:40:42.553 "ddgst": false, 00:40:42.553 "psk": "key0", 00:40:42.553 "allow_unrecognized_csi": false, 00:40:42.553 "method": "bdev_nvme_attach_controller", 00:40:42.553 "req_id": 1 00:40:42.553 } 00:40:42.553 Got JSON-RPC error response 00:40:42.553 response: 00:40:42.553 { 00:40:42.553 "code": -19, 00:40:42.553 "message": "No such device" 00:40:42.553 } 00:40:42.553 14:59:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:42.553 14:59:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:42.553 14:59:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:42.553 14:59:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:42.553 14:59:54 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:42.553 14:59:54 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5jd4tOJ4YJ 00:40:42.553 14:59:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:42.553 14:59:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:42.553 14:59:54 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:40:42.554 14:59:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:42.554 14:59:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:42.554 14:59:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:42.554 14:59:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:42.812 14:59:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5jd4tOJ4YJ 00:40:42.812 14:59:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5jd4tOJ4YJ 00:40:42.812 14:59:54 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.5jd4tOJ4YJ 00:40:42.812 14:59:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5jd4tOJ4YJ 00:40:42.812 14:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5jd4tOJ4YJ 00:40:42.812 14:59:54 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:42.812 14:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:43.070 nvme0n1 00:40:43.070 14:59:54 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:43.070 14:59:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:43.070 14:59:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:43.070 14:59:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:43.070 14:59:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:43.071 14:59:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:43.329 14:59:55 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:43.329 14:59:55 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:43.329 14:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:43.587 14:59:55 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:43.587 14:59:55 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:43.587 14:59:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:43.587 14:59:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:43.587 14:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:43.845 14:59:55 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:43.845 14:59:55 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:43.845 14:59:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:43.845 14:59:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:43.845 14:59:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:43.845 14:59:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:43.845 14:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:43.845 14:59:55 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:43.845 14:59:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:43.845 14:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:40:44.104 14:59:55 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:44.104 14:59:55 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:44.104 14:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:44.362 14:59:56 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:44.362 14:59:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5jd4tOJ4YJ 00:40:44.362 14:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5jd4tOJ4YJ 00:40:44.620 14:59:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.m02atIZ6DM 00:40:44.620 14:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.m02atIZ6DM 00:40:44.878 14:59:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:44.878 14:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:44.878 nvme0n1 00:40:45.137 14:59:56 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:45.137 14:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:45.396 14:59:57 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:45.396 "subsystems": [ 00:40:45.396 { 00:40:45.396 "subsystem": 
"keyring", 00:40:45.396 "config": [ 00:40:45.396 { 00:40:45.396 "method": "keyring_file_add_key", 00:40:45.396 "params": { 00:40:45.396 "name": "key0", 00:40:45.396 "path": "/tmp/tmp.5jd4tOJ4YJ" 00:40:45.396 } 00:40:45.396 }, 00:40:45.396 { 00:40:45.396 "method": "keyring_file_add_key", 00:40:45.396 "params": { 00:40:45.396 "name": "key1", 00:40:45.396 "path": "/tmp/tmp.m02atIZ6DM" 00:40:45.396 } 00:40:45.396 } 00:40:45.396 ] 00:40:45.396 }, 00:40:45.396 { 00:40:45.396 "subsystem": "iobuf", 00:40:45.396 "config": [ 00:40:45.396 { 00:40:45.396 "method": "iobuf_set_options", 00:40:45.396 "params": { 00:40:45.396 "small_pool_count": 8192, 00:40:45.396 "large_pool_count": 1024, 00:40:45.396 "small_bufsize": 8192, 00:40:45.396 "large_bufsize": 135168, 00:40:45.396 "enable_numa": false 00:40:45.396 } 00:40:45.396 } 00:40:45.396 ] 00:40:45.396 }, 00:40:45.396 { 00:40:45.396 "subsystem": "sock", 00:40:45.396 "config": [ 00:40:45.396 { 00:40:45.396 "method": "sock_set_default_impl", 00:40:45.396 "params": { 00:40:45.396 "impl_name": "posix" 00:40:45.396 } 00:40:45.396 }, 00:40:45.396 { 00:40:45.396 "method": "sock_impl_set_options", 00:40:45.396 "params": { 00:40:45.396 "impl_name": "ssl", 00:40:45.396 "recv_buf_size": 4096, 00:40:45.396 "send_buf_size": 4096, 00:40:45.396 "enable_recv_pipe": true, 00:40:45.396 "enable_quickack": false, 00:40:45.396 "enable_placement_id": 0, 00:40:45.396 "enable_zerocopy_send_server": true, 00:40:45.396 "enable_zerocopy_send_client": false, 00:40:45.396 "zerocopy_threshold": 0, 00:40:45.396 "tls_version": 0, 00:40:45.396 "enable_ktls": false 00:40:45.396 } 00:40:45.396 }, 00:40:45.396 { 00:40:45.396 "method": "sock_impl_set_options", 00:40:45.396 "params": { 00:40:45.396 "impl_name": "posix", 00:40:45.396 "recv_buf_size": 2097152, 00:40:45.396 "send_buf_size": 2097152, 00:40:45.396 "enable_recv_pipe": true, 00:40:45.396 "enable_quickack": false, 00:40:45.397 "enable_placement_id": 0, 00:40:45.397 "enable_zerocopy_send_server": true, 
00:40:45.397 "enable_zerocopy_send_client": false, 00:40:45.397 "zerocopy_threshold": 0, 00:40:45.397 "tls_version": 0, 00:40:45.397 "enable_ktls": false 00:40:45.397 } 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "vmd", 00:40:45.397 "config": [] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "accel", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "accel_set_options", 00:40:45.397 "params": { 00:40:45.397 "small_cache_size": 128, 00:40:45.397 "large_cache_size": 16, 00:40:45.397 "task_count": 2048, 00:40:45.397 "sequence_count": 2048, 00:40:45.397 "buf_count": 2048 00:40:45.397 } 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "bdev", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "bdev_set_options", 00:40:45.397 "params": { 00:40:45.397 "bdev_io_pool_size": 65535, 00:40:45.397 "bdev_io_cache_size": 256, 00:40:45.397 "bdev_auto_examine": true, 00:40:45.397 "iobuf_small_cache_size": 128, 00:40:45.397 "iobuf_large_cache_size": 16 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_raid_set_options", 00:40:45.397 "params": { 00:40:45.397 "process_window_size_kb": 1024, 00:40:45.397 "process_max_bandwidth_mb_sec": 0 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_iscsi_set_options", 00:40:45.397 "params": { 00:40:45.397 "timeout_sec": 30 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_nvme_set_options", 00:40:45.397 "params": { 00:40:45.397 "action_on_timeout": "none", 00:40:45.397 "timeout_us": 0, 00:40:45.397 "timeout_admin_us": 0, 00:40:45.397 "keep_alive_timeout_ms": 10000, 00:40:45.397 "arbitration_burst": 0, 00:40:45.397 "low_priority_weight": 0, 00:40:45.397 "medium_priority_weight": 0, 00:40:45.397 "high_priority_weight": 0, 00:40:45.397 "nvme_adminq_poll_period_us": 10000, 00:40:45.397 "nvme_ioq_poll_period_us": 0, 00:40:45.397 "io_queue_requests": 512, 
00:40:45.397 "delay_cmd_submit": true, 00:40:45.397 "transport_retry_count": 4, 00:40:45.397 "bdev_retry_count": 3, 00:40:45.397 "transport_ack_timeout": 0, 00:40:45.397 "ctrlr_loss_timeout_sec": 0, 00:40:45.397 "reconnect_delay_sec": 0, 00:40:45.397 "fast_io_fail_timeout_sec": 0, 00:40:45.397 "disable_auto_failback": false, 00:40:45.397 "generate_uuids": false, 00:40:45.397 "transport_tos": 0, 00:40:45.397 "nvme_error_stat": false, 00:40:45.397 "rdma_srq_size": 0, 00:40:45.397 "io_path_stat": false, 00:40:45.397 "allow_accel_sequence": false, 00:40:45.397 "rdma_max_cq_size": 0, 00:40:45.397 "rdma_cm_event_timeout_ms": 0, 00:40:45.397 "dhchap_digests": [ 00:40:45.397 "sha256", 00:40:45.397 "sha384", 00:40:45.397 "sha512" 00:40:45.397 ], 00:40:45.397 "dhchap_dhgroups": [ 00:40:45.397 "null", 00:40:45.397 "ffdhe2048", 00:40:45.397 "ffdhe3072", 00:40:45.397 "ffdhe4096", 00:40:45.397 "ffdhe6144", 00:40:45.397 "ffdhe8192" 00:40:45.397 ] 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_nvme_attach_controller", 00:40:45.397 "params": { 00:40:45.397 "name": "nvme0", 00:40:45.397 "trtype": "TCP", 00:40:45.397 "adrfam": "IPv4", 00:40:45.397 "traddr": "127.0.0.1", 00:40:45.397 "trsvcid": "4420", 00:40:45.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:45.397 "prchk_reftag": false, 00:40:45.397 "prchk_guard": false, 00:40:45.397 "ctrlr_loss_timeout_sec": 0, 00:40:45.397 "reconnect_delay_sec": 0, 00:40:45.397 "fast_io_fail_timeout_sec": 0, 00:40:45.397 "psk": "key0", 00:40:45.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:45.397 "hdgst": false, 00:40:45.397 "ddgst": false, 00:40:45.397 "multipath": "multipath" 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_nvme_set_hotplug", 00:40:45.397 "params": { 00:40:45.397 "period_us": 100000, 00:40:45.397 "enable": false 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_wait_for_examine" 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 
00:40:45.397 "subsystem": "nbd", 00:40:45.397 "config": [] 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }' 00:40:45.397 14:59:57 keyring_file -- keyring/file.sh@115 -- # killprocess 1874397 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1874397 ']' 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1874397 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874397 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874397' 00:40:45.397 killing process with pid 1874397 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@973 -- # kill 1874397 00:40:45.397 Received shutdown signal, test time was about 1.000000 seconds 00:40:45.397 00:40:45.397 Latency(us) 00:40:45.397 [2024-11-20T13:59:57.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:45.397 [2024-11-20T13:59:57.355Z] =================================================================================================================== 00:40:45.397 [2024-11-20T13:59:57.355Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@978 -- # wait 1874397 00:40:45.397 14:59:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=1875963 00:40:45.397 14:59:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1875963 /var/tmp/bperf.sock 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1875963 ']' 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:40:45.397 14:59:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:45.397 14:59:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:45.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:45.397 14:59:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:45.397 "subsystems": [ 00:40:45.397 { 00:40:45.397 "subsystem": "keyring", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "keyring_file_add_key", 00:40:45.397 "params": { 00:40:45.397 "name": "key0", 00:40:45.397 "path": "/tmp/tmp.5jd4tOJ4YJ" 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "keyring_file_add_key", 00:40:45.397 "params": { 00:40:45.397 "name": "key1", 00:40:45.397 "path": "/tmp/tmp.m02atIZ6DM" 00:40:45.397 } 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "iobuf", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "iobuf_set_options", 00:40:45.397 "params": { 00:40:45.397 "small_pool_count": 8192, 00:40:45.397 "large_pool_count": 1024, 00:40:45.397 "small_bufsize": 8192, 00:40:45.397 "large_bufsize": 135168, 00:40:45.397 "enable_numa": false 00:40:45.397 } 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "sock", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "sock_set_default_impl", 00:40:45.397 "params": { 00:40:45.397 "impl_name": "posix" 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "sock_impl_set_options", 00:40:45.397 "params": { 00:40:45.397 "impl_name": "ssl", 00:40:45.397 "recv_buf_size": 4096, 00:40:45.397 
"send_buf_size": 4096, 00:40:45.397 "enable_recv_pipe": true, 00:40:45.397 "enable_quickack": false, 00:40:45.397 "enable_placement_id": 0, 00:40:45.397 "enable_zerocopy_send_server": true, 00:40:45.397 "enable_zerocopy_send_client": false, 00:40:45.397 "zerocopy_threshold": 0, 00:40:45.397 "tls_version": 0, 00:40:45.397 "enable_ktls": false 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "sock_impl_set_options", 00:40:45.397 "params": { 00:40:45.397 "impl_name": "posix", 00:40:45.397 "recv_buf_size": 2097152, 00:40:45.397 "send_buf_size": 2097152, 00:40:45.397 "enable_recv_pipe": true, 00:40:45.397 "enable_quickack": false, 00:40:45.397 "enable_placement_id": 0, 00:40:45.397 "enable_zerocopy_send_server": true, 00:40:45.397 "enable_zerocopy_send_client": false, 00:40:45.397 "zerocopy_threshold": 0, 00:40:45.397 "tls_version": 0, 00:40:45.397 "enable_ktls": false 00:40:45.397 } 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "vmd", 00:40:45.397 "config": [] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "accel", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "accel_set_options", 00:40:45.397 "params": { 00:40:45.397 "small_cache_size": 128, 00:40:45.397 "large_cache_size": 16, 00:40:45.397 "task_count": 2048, 00:40:45.397 "sequence_count": 2048, 00:40:45.397 "buf_count": 2048 00:40:45.397 } 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "bdev", 00:40:45.397 "config": [ 00:40:45.397 { 00:40:45.397 "method": "bdev_set_options", 00:40:45.397 "params": { 00:40:45.397 "bdev_io_pool_size": 65535, 00:40:45.397 "bdev_io_cache_size": 256, 00:40:45.397 "bdev_auto_examine": true, 00:40:45.397 "iobuf_small_cache_size": 128, 00:40:45.397 "iobuf_large_cache_size": 16 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_raid_set_options", 00:40:45.397 "params": { 00:40:45.397 "process_window_size_kb": 1024, 00:40:45.397 
"process_max_bandwidth_mb_sec": 0 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_iscsi_set_options", 00:40:45.397 "params": { 00:40:45.397 "timeout_sec": 30 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_nvme_set_options", 00:40:45.397 "params": { 00:40:45.397 "action_on_timeout": "none", 00:40:45.397 "timeout_us": 0, 00:40:45.397 "timeout_admin_us": 0, 00:40:45.397 "keep_alive_timeout_ms": 10000, 00:40:45.397 "arbitration_burst": 0, 00:40:45.397 "low_priority_weight": 0, 00:40:45.397 "medium_priority_weight": 0, 00:40:45.397 "high_priority_weight": 0, 00:40:45.397 "nvme_adminq_poll_period_us": 10000, 00:40:45.397 "nvme_ioq_poll_period_us": 0, 00:40:45.397 "io_queue_requests": 512, 00:40:45.397 "delay_cmd_submit": true, 00:40:45.397 "transport_retry_count": 4, 00:40:45.397 "bdev_retry_count": 3, 00:40:45.397 "transport_ack_timeout": 0, 00:40:45.397 "ctrlr_loss_timeout_sec": 0, 00:40:45.397 "reconnect_delay_sec": 0, 00:40:45.397 "fast_io_fail_timeout_sec": 0, 00:40:45.397 "disable_auto_failback": false, 00:40:45.397 "generate_uuids": false, 00:40:45.397 "transport_tos": 0, 00:40:45.397 "nvme_error_stat": false, 00:40:45.397 "rdma_srq_size": 0, 00:40:45.397 "io_path_stat": false, 00:40:45.397 "allow_accel_sequence": false, 00:40:45.397 "rdma_max_cq_size": 0, 00:40:45.397 "rdma_cm_event_timeout_ms": 0, 00:40:45.397 "dhchap_digests": [ 00:40:45.397 "sha256", 00:40:45.397 "sha384", 00:40:45.397 "sha512" 00:40:45.397 ], 00:40:45.397 "dhchap_dhgroups": [ 00:40:45.397 "null", 00:40:45.397 "ffdhe2048", 00:40:45.397 "ffdhe3072", 00:40:45.397 "ffdhe4096", 00:40:45.397 "ffdhe6144", 00:40:45.397 "ffdhe8192" 00:40:45.397 ] 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_nvme_attach_controller", 00:40:45.397 "params": { 00:40:45.397 "name": "nvme0", 00:40:45.397 "trtype": "TCP", 00:40:45.397 "adrfam": "IPv4", 00:40:45.397 "traddr": "127.0.0.1", 00:40:45.397 "trsvcid": "4420", 00:40:45.397 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:40:45.397 "prchk_reftag": false, 00:40:45.397 "prchk_guard": false, 00:40:45.397 "ctrlr_loss_timeout_sec": 0, 00:40:45.397 "reconnect_delay_sec": 0, 00:40:45.397 "fast_io_fail_timeout_sec": 0, 00:40:45.397 "psk": "key0", 00:40:45.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:45.397 "hdgst": false, 00:40:45.397 "ddgst": false, 00:40:45.397 "multipath": "multipath" 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_nvme_set_hotplug", 00:40:45.397 "params": { 00:40:45.397 "period_us": 100000, 00:40:45.397 "enable": false 00:40:45.397 } 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "method": "bdev_wait_for_examine" 00:40:45.397 } 00:40:45.397 ] 00:40:45.397 }, 00:40:45.397 { 00:40:45.397 "subsystem": "nbd", 00:40:45.397 "config": [] 00:40:45.398 } 00:40:45.398 ] 00:40:45.398 }' 00:40:45.398 14:59:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:45.398 14:59:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:45.656 [2024-11-20 14:59:57.376025] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:40:45.657 [2024-11-20 14:59:57.376072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875963 ] 00:40:45.657 [2024-11-20 14:59:57.450283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.657 [2024-11-20 14:59:57.492628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:45.915 [2024-11-20 14:59:57.654341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:46.482 14:59:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:46.482 14:59:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:46.482 14:59:58 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:46.482 14:59:58 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:46.482 14:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:46.482 14:59:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:46.482 14:59:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:46.482 14:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:46.482 14:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:46.482 14:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:46.482 14:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:46.482 14:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:46.741 14:59:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:46.741 14:59:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:46.741 14:59:58 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:46.741 14:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:46.741 14:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:46.741 14:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:46.741 14:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:46.999 14:59:58 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:46.999 14:59:58 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:46.999 14:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:46.999 14:59:58 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:47.259 14:59:59 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:47.259 14:59:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:47.259 14:59:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5jd4tOJ4YJ /tmp/tmp.m02atIZ6DM 00:40:47.259 14:59:59 keyring_file -- keyring/file.sh@20 -- # killprocess 1875963 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1875963 ']' 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1875963 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1875963 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1875963' 00:40:47.259 killing process with pid 1875963 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@973 -- # kill 1875963 00:40:47.259 Received shutdown signal, test time was about 1.000000 seconds 00:40:47.259 00:40:47.259 Latency(us) 00:40:47.259 [2024-11-20T13:59:59.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:47.259 [2024-11-20T13:59:59.217Z] =================================================================================================================== 00:40:47.259 [2024-11-20T13:59:59.217Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:47.259 14:59:59 keyring_file -- common/autotest_common.sh@978 -- # wait 1875963 00:40:47.518 14:59:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1874236 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1874236 ']' 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1874236 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1874236 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1874236' 00:40:47.518 killing process with pid 1874236 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@973 -- # kill 1874236 00:40:47.518 14:59:59 keyring_file -- common/autotest_common.sh@978 -- # wait 1874236 00:40:47.776 00:40:47.776 real 0m12.427s 00:40:47.776 user 0m30.241s 00:40:47.776 sys 0m2.793s 00:40:47.776 14:59:59 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:40:47.777 14:59:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:47.777 ************************************ 00:40:47.777 END TEST keyring_file 00:40:47.777 ************************************ 00:40:47.777 14:59:59 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:47.777 14:59:59 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:47.777 14:59:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:47.777 14:59:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:47.777 14:59:59 -- common/autotest_common.sh@10 -- # set +x 00:40:47.777 ************************************ 00:40:47.777 START TEST keyring_linux 00:40:47.777 ************************************ 00:40:47.777 14:59:59 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:47.777 Joined session keyring: 433739543 00:40:47.777 * Looking for test storage... 
00:40:47.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:47.777 14:59:59 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:47.777 14:59:59 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:40:47.777 14:59:59 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:48.037 14:59:59 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:48.037 14:59:59 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:48.037 14:59:59 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:48.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.037 --rc genhtml_branch_coverage=1 00:40:48.037 --rc genhtml_function_coverage=1 00:40:48.037 --rc genhtml_legend=1 00:40:48.037 --rc geninfo_all_blocks=1 00:40:48.037 --rc geninfo_unexecuted_blocks=1 00:40:48.037 00:40:48.037 ' 00:40:48.037 14:59:59 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:48.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.037 --rc genhtml_branch_coverage=1 00:40:48.037 --rc genhtml_function_coverage=1 00:40:48.037 --rc genhtml_legend=1 00:40:48.037 --rc geninfo_all_blocks=1 00:40:48.037 --rc geninfo_unexecuted_blocks=1 00:40:48.037 00:40:48.037 ' 
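[Editor's note] The `cmp_versions`/`lt 1.15 2` trace above splits each version string on `.`, `-`, and `:` and compares the components numerically, left to right, padding the shorter list with zeros. A minimal Python sketch of that comparison loop (function names are mine, not SPDK's):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Compare dotted version strings component-by-component,
    mirroring the split-on-'.-:' loop in scripts/common.sh."""
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    # Pad the shorter list so "2" compares like "2.0"
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return {"<": x < y, ">": x > y}[op]
    return False  # equal: strict < and > are both false

def lt(v1: str, v2: str) -> bool:
    return cmp_versions(v1, "<", v2)
```

With the values from the trace, `lt("1.15", "2")` compares 1 against 2 in the first component and returns true, which is why the shell function returns 0 here and the lcov-1.x option set is selected.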
00:40:48.037 14:59:59 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:48.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.037 --rc genhtml_branch_coverage=1 00:40:48.037 --rc genhtml_function_coverage=1 00:40:48.037 --rc genhtml_legend=1 00:40:48.037 --rc geninfo_all_blocks=1 00:40:48.037 --rc geninfo_unexecuted_blocks=1 00:40:48.037 00:40:48.037 ' 00:40:48.037 14:59:59 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:48.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:48.037 --rc genhtml_branch_coverage=1 00:40:48.037 --rc genhtml_function_coverage=1 00:40:48.037 --rc genhtml_legend=1 00:40:48.037 --rc geninfo_all_blocks=1 00:40:48.037 --rc geninfo_unexecuted_blocks=1 00:40:48.037 00:40:48.037 ' 00:40:48.037 14:59:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:48.037 14:59:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:48.037 14:59:59 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:48.037 14:59:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.037 14:59:59 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.037 14:59:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.037 14:59:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:48.037 14:59:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:40:48.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:48.037 14:59:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:48.037 14:59:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:48.037 14:59:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:48.037 14:59:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:48.038 /tmp/:spdk-test:key0 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:48.038 14:59:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:48.038 14:59:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:48.038 /tmp/:spdk-test:key1 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1876346 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1876346 00:40:48.038 14:59:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:48.038 14:59:59 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1876346 ']' 00:40:48.038 14:59:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.038 14:59:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:48.038 14:59:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.038 14:59:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:48.038 14:59:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:48.038 [2024-11-20 14:59:59.987423] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
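[Editor's note] The `prep_key`/`format_interchange_psk` traces above wrap each configured key in the NVMe/TCP TLS PSK interchange format: the key bytes, followed by a little-endian CRC32 of those bytes, base64-encoded between an `NVMeTLSkey-1:<hh>:` prefix (`00` = no hash, matching `digest=0`) and a trailing colon. A rough reconstruction of what the inline `python -` step computes, under that assumption (this is my sketch, not SPDK's actual helper):

```python
import base64
import zlib

def format_interchange_psk(key: str, hmac: int = 0) -> str:
    """Wrap `key` in the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<hh>:base64(key_bytes || CRC32_LE(key_bytes)):"""
    key_bytes = key.encode("ascii")
    crc = zlib.crc32(key_bytes).to_bytes(4, "little")
    payload = base64.b64encode(key_bytes + crc).decode("ascii")
    return "NVMeTLSkey-1:%02d:%s:" % (hmac, payload)
```

For `key0=00112233445566778899aabbccddeeff` this yields a value beginning `NVMeTLSkey-1:00:MDAxMTIy…`, consistent with the key material the log later feeds to `keyctl add`.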
00:40:48.038 [2024-11-20 14:59:59.987475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876346 ] 00:40:48.297 [2024-11-20 15:00:00.064995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.297 [2024-11-20 15:00:00.112291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.555 15:00:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:48.555 15:00:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:48.555 15:00:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:48.556 [2024-11-20 15:00:00.331573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.556 null0 00:40:48.556 [2024-11-20 15:00:00.363621] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:48.556 [2024-11-20 15:00:00.364000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.556 15:00:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:48.556 766144984 00:40:48.556 15:00:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:48.556 495778375 00:40:48.556 15:00:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1876545 00:40:48.556 15:00:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1876545 /var/tmp/bperf.sock 00:40:48.556 15:00:00 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1876545 ']' 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:48.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:48.556 15:00:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:48.556 [2024-11-20 15:00:00.437919] Starting SPDK v25.01-pre git sha1 d2ebd983e / DPDK 24.03.0 initialization... 
00:40:48.556 [2024-11-20 15:00:00.437977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876545 ] 00:40:48.814 [2024-11-20 15:00:00.512869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.814 [2024-11-20 15:00:00.555156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.814 15:00:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:48.814 15:00:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:48.814 15:00:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:48.814 15:00:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:49.073 15:00:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:49.073 15:00:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:49.331 15:00:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:49.331 15:00:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:49.331 [2024-11-20 15:00:01.217384] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:49.590 nvme0n1 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:49.590 15:00:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:49.590 15:00:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:49.590 15:00:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:49.590 15:00:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:49.590 15:00:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:49.848 15:00:01 keyring_linux -- keyring/linux.sh@25 -- # sn=766144984 00:40:49.848 15:00:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:49.848 15:00:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:49.848 15:00:01 keyring_linux -- keyring/linux.sh@26 -- # [[ 766144984 == \7\6\6\1\4\4\9\8\4 ]] 00:40:49.848 15:00:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 766144984 00:40:49.849 15:00:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:49.849 15:00:01 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:49.849 Running I/O for 1 seconds... 00:40:51.227 20922.00 IOPS, 81.73 MiB/s 00:40:51.227 Latency(us) 00:40:51.227 [2024-11-20T14:00:03.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:51.227 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:51.227 nvme0n1 : 1.01 20919.13 81.72 0.00 0.00 6097.60 2008.82 7693.36 00:40:51.227 [2024-11-20T14:00:03.185Z] =================================================================================================================== 00:40:51.227 [2024-11-20T14:00:03.185Z] Total : 20919.13 81.72 0.00 0.00 6097.60 2008.82 7693.36 00:40:51.227 { 00:40:51.227 "results": [ 00:40:51.227 { 00:40:51.227 "job": "nvme0n1", 00:40:51.227 "core_mask": "0x2", 00:40:51.227 "workload": "randread", 00:40:51.227 "status": "finished", 00:40:51.227 "queue_depth": 128, 00:40:51.227 "io_size": 4096, 00:40:51.227 "runtime": 1.006256, 00:40:51.227 "iops": 20919.129923200457, 00:40:51.227 "mibps": 81.71535126250178, 00:40:51.227 "io_failed": 0, 00:40:51.227 "io_timeout": 0, 00:40:51.227 "avg_latency_us": 6097.601283569142, 00:40:51.227 "min_latency_us": 2008.8208695652174, 00:40:51.227 "max_latency_us": 7693.356521739131 00:40:51.227 } 00:40:51.227 ], 00:40:51.227 "core_count": 1 00:40:51.227 } 00:40:51.227 15:00:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:51.227 15:00:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:51.227 15:00:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:51.227 15:00:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:51.227 15:00:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:51.227 15:00:03 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:51.227 15:00:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:51.227 15:00:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:51.487 15:00:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:51.487 [2024-11-20 15:00:03.413081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:51.487 [2024-11-20 15:00:03.413778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af6f60 (107): Transport endpoint is not connected 00:40:51.487 [2024-11-20 15:00:03.414773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af6f60 (9): Bad file descriptor 00:40:51.487 [2024-11-20 15:00:03.415774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:51.487 [2024-11-20 15:00:03.415783] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:51.487 [2024-11-20 15:00:03.415790] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:51.487 [2024-11-20 15:00:03.415798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:51.487 request: 00:40:51.487 { 00:40:51.487 "name": "nvme0", 00:40:51.487 "trtype": "tcp", 00:40:51.487 "traddr": "127.0.0.1", 00:40:51.487 "adrfam": "ipv4", 00:40:51.487 "trsvcid": "4420", 00:40:51.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:51.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:51.487 "prchk_reftag": false, 00:40:51.487 "prchk_guard": false, 00:40:51.487 "hdgst": false, 00:40:51.487 "ddgst": false, 00:40:51.487 "psk": ":spdk-test:key1", 00:40:51.487 "allow_unrecognized_csi": false, 00:40:51.487 "method": "bdev_nvme_attach_controller", 00:40:51.487 "req_id": 1 00:40:51.487 } 00:40:51.487 Got JSON-RPC error response 00:40:51.487 response: 00:40:51.487 { 00:40:51.487 "code": -5, 00:40:51.487 "message": "Input/output error" 00:40:51.487 } 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:51.487 15:00:03 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@33 -- # sn=766144984 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 766144984 00:40:51.487 1 links removed 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:51.487 15:00:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:51.487 
15:00:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:51.747 15:00:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:51.747 15:00:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:51.747 15:00:03 keyring_linux -- keyring/linux.sh@33 -- # sn=495778375 00:40:51.747 15:00:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 495778375 00:40:51.747 1 links removed 00:40:51.747 15:00:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1876545 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1876545 ']' 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1876545 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1876545 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1876545' 00:40:51.747 killing process with pid 1876545 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 1876545 00:40:51.747 Received shutdown signal, test time was about 1.000000 seconds 00:40:51.747 00:40:51.747 Latency(us) 00:40:51.747 [2024-11-20T14:00:03.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:51.747 [2024-11-20T14:00:03.705Z] =================================================================================================================== 00:40:51.747 [2024-11-20T14:00:03.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 1876545 
00:40:51.747 15:00:03 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1876346 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1876346 ']' 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1876346 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.747 15:00:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1876346 00:40:52.006 15:00:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:52.006 15:00:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:52.006 15:00:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1876346' 00:40:52.006 killing process with pid 1876346 00:40:52.006 15:00:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 1876346 00:40:52.006 15:00:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 1876346 00:40:52.266 00:40:52.266 real 0m4.383s 00:40:52.266 user 0m8.236s 00:40:52.266 sys 0m1.463s 00:40:52.266 15:00:04 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:52.266 15:00:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:52.266 ************************************ 00:40:52.266 END TEST keyring_linux 00:40:52.266 ************************************ 00:40:52.266 15:00:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:52.266 15:00:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:52.266 15:00:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:52.266 15:00:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:52.266 15:00:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:52.266 15:00:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:52.266 15:00:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:52.266 15:00:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:52.266 15:00:04 -- common/autotest_common.sh@10 -- # set +x 00:40:52.266 15:00:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:52.266 15:00:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:52.266 15:00:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:52.266 15:00:04 -- common/autotest_common.sh@10 -- # set +x 00:40:57.545 INFO: APP EXITING 00:40:57.545 INFO: killing all VMs 00:40:57.545 INFO: killing vhost app 00:40:57.545 INFO: EXIT DONE 00:41:00.080 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:41:00.080 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:41:00.080 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:41:00.080 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:41:03.371 Cleaning 00:41:03.371 Removing: /var/run/dpdk/spdk0/config 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:03.371 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:03.371 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:03.371 Removing: /var/run/dpdk/spdk1/config 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:03.371 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:03.371 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:03.371 Removing: /var/run/dpdk/spdk2/config 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:03.371 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:03.371 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:03.371 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:03.371 Removing: /var/run/dpdk/spdk3/config 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:03.371 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:03.371 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:03.371 Removing: /var/run/dpdk/spdk4/config 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:03.371 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:03.371 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:41:03.371 Removing: /dev/shm/bdev_svc_trace.1 00:41:03.371 Removing: /dev/shm/nvmf_trace.0 00:41:03.371 Removing: /dev/shm/spdk_tgt_trace.pid1398168 00:41:03.371 Removing: /var/run/dpdk/spdk0 00:41:03.371 Removing: /var/run/dpdk/spdk1 00:41:03.371 Removing: /var/run/dpdk/spdk2 00:41:03.371 Removing: /var/run/dpdk/spdk3 00:41:03.371 Removing: /var/run/dpdk/spdk4 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1396019 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1397083 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1398168 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1398807 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1399760 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1399991 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1400965 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1400971 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1401325 00:41:03.371 Removing: /var/run/dpdk/spdk_pid1402845 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1404211 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1404637 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1404834 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1405035 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1405312 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1405562 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1405814 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1406096 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1406840 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1409963 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1410098 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1410354 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1410372 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1410850 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1410866 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1411350 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1411472 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1411833 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1411844 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1412107 00:41:03.372 Removing: 
/var/run/dpdk/spdk_pid1412115 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1412676 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1412926 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1413226 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1416928 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1421426 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1432055 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1432678 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1436949 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1437231 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1441701 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1447582 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1450196 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1460410 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1469414 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1471299 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1472613 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1489716 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1493780 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1539557 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1544947 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1550712 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1557024 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1557117 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1557902 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1558818 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1559731 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1560197 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1560272 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1560574 00:41:03.372 Removing: /var/run/dpdk/spdk_pid1560669 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1560671 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1561582 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1562495 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1563316 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1563881 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1563883 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1564119 
00:41:03.632 Removing: /var/run/dpdk/spdk_pid1565323 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1566381 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1574939 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1604138 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1608821 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1610424 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1612257 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1612390 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1612509 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1612745 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1613250 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1615030 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1615846 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1616337 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1618449 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1618936 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1619662 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1623756 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1629198 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1629200 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1629202 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1633098 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1643332 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1647206 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1653536 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1654805 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1656329 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1657626 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1662115 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1666357 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1670503 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1678116 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1678119 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1682580 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1682813 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1683037 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1683481 00:41:03.632 Removing: 
/var/run/dpdk/spdk_pid1683492 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1687924 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1688488 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1692788 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1695320 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1701135 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1706636 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1715260 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1722239 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1722241 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1740831 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1741293 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1741902 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1742558 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1743288 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1744059 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1744616 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1745292 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1749489 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1749745 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1755675 00:41:03.632 Removing: /var/run/dpdk/spdk_pid1755790 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1761059 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1765200 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1774997 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1775485 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1779711 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1780123 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1784166 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1790063 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1792977 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1802834 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1811415 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1812988 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1813892 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1829859 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1833639 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1836412 
00:41:03.891 Removing: /var/run/dpdk/spdk_pid1844450 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1844472 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1849546 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1851374 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1853304 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1854522 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1856472 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1857529 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1866183 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1866647 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1867293 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1869540 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1869999 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1870454 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1874236 00:41:03.891 Removing: /var/run/dpdk/spdk_pid1874397 00:41:03.892 Removing: /var/run/dpdk/spdk_pid1875963 00:41:03.892 Removing: /var/run/dpdk/spdk_pid1876346 00:41:03.892 Removing: /var/run/dpdk/spdk_pid1876545 00:41:03.892 Clean 00:41:03.892 15:00:15 -- common/autotest_common.sh@1453 -- # return 0 00:41:03.892 15:00:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:41:03.892 15:00:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:03.892 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:41:03.892 15:00:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:41:03.892 15:00:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:03.892 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:41:04.151 15:00:15 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:04.151 15:00:15 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:04.151 15:00:15 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:04.151 15:00:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:41:04.151 15:00:15 
-- spdk/autotest.sh@398 -- # hostname 00:41:04.151 15:00:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:04.151 geninfo: WARNING: invalid characters removed from testname! 00:41:26.087 15:00:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:27.465 15:00:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:29.368 15:00:41 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:31.286 15:00:43 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:33.191 15:00:45 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:35.101 15:00:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:37.009 15:00:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:37.009 15:00:48 -- spdk/autorun.sh@1 -- $ timing_finish 00:41:37.009 15:00:48 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:41:37.009 15:00:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:37.009 15:00:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:41:37.009 15:00:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:37.009 + [[ -n 
1318566 ]] 00:41:37.009 + sudo kill 1318566 00:41:37.019 [Pipeline] } 00:41:37.034 [Pipeline] // stage 00:41:37.040 [Pipeline] } 00:41:37.055 [Pipeline] // timeout 00:41:37.060 [Pipeline] } 00:41:37.074 [Pipeline] // catchError 00:41:37.080 [Pipeline] } 00:41:37.095 [Pipeline] // wrap 00:41:37.102 [Pipeline] } 00:41:37.114 [Pipeline] // catchError 00:41:37.124 [Pipeline] stage 00:41:37.126 [Pipeline] { (Epilogue) 00:41:37.139 [Pipeline] catchError 00:41:37.141 [Pipeline] { 00:41:37.154 [Pipeline] echo 00:41:37.156 Cleanup processes 00:41:37.162 [Pipeline] sh 00:41:37.448 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:37.448 1883776 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:37.457 [Pipeline] sh 00:41:37.839 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:37.839 ++ grep -v 'sudo pgrep' 00:41:37.839 ++ awk '{print $1}' 00:41:37.839 + sudo kill -9 00:41:37.839 + true 00:41:37.897 [Pipeline] sh 00:41:38.182 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:50.402 [Pipeline] sh 00:41:50.687 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:50.687 Artifacts sizes are good 00:41:50.702 [Pipeline] archiveArtifacts 00:41:50.709 Archiving artifacts 00:41:50.847 [Pipeline] sh 00:41:51.130 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:51.145 [Pipeline] cleanWs 00:41:51.155 [WS-CLEANUP] Deleting project workspace... 00:41:51.155 [WS-CLEANUP] Deferred wipeout is used... 00:41:51.162 [WS-CLEANUP] done 00:41:51.165 [Pipeline] } 00:41:51.183 [Pipeline] // catchError 00:41:51.196 [Pipeline] sh 00:41:51.479 + logger -p user.info -t JENKINS-CI 00:41:51.488 [Pipeline] } 00:41:51.501 [Pipeline] // stage 00:41:51.506 [Pipeline] } 00:41:51.518 [Pipeline] // node 00:41:51.523 [Pipeline] End of Pipeline 00:41:51.555 Finished: SUCCESS